An Autonomous Vision-Guided Helicopter
Omead Amidi
August 1996
Department of Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
Submitted to the
Department of Electrical and Computer Engineering
in partial fulfillment of the requirements for
the degree of Doctor of Philosophy
© 1996 Omead Amidi
This research was partly supported by SECOM security company and Yamaha Motor Company. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of SECOM or Yamaha Motor Company.
Keywords: helicopter, autonomous, vision-based navigation, real-time image processing.
Abstract

Helicopters are indispensable air vehicles for many applications ranging from rescue and crime fighting to inspection and surveillance. They are most effective when flown at close proximity to objects of interest while performing tasks such as delivering critical supplies, rescuing stranded individuals, or inspecting damaged buildings. These tasks require dangerous flight patterns which risk human pilot safety. An unmanned helicopter which operates autonomously can carry out such tasks more effectively without risking human lives. The work presented in this dissertation develops an autonomous helicopter system for such applications. The system employs on-board vision for stability and guidance relative to objects of interest in the environment.
Developing a vision-based helicopter positioning and control system is challenging for several reasons. First, helicopters are inherently unstable and capable of exhibiting high acceleration rates. They are highly sensitive to control inputs and require high-frequency feedback with minimum delay for stability. For stable hovering, for example, vision-based feedback rates must be at least 30-60 Hz with no more than 1/30 second latency. Second, since helicopters rotate at high angular rates to direct main rotor thrust for translational motion, it is difficult to disambiguate rotation from translation with vision alone to estimate helicopter 3D motion. Third, helicopters have limited on-board power and payload capacity. Vision and control systems must be compact, efficient, and lightweight for effective on-board integration. Finally, helicopters are extremely dangerous and present major obstacles to safe and calibrated experimentation to design and evaluate on-board systems.

This dissertation addresses these issues by developing: a “visual odometer” for helicopter position estimation, a real-time, low-latency vision machine architecture to implement an on-board visual odometer machine, and an array of innovative indoor testbeds for calibrated experimentation to design, build, and demonstrate an airworthy vision-guided autonomous helicopter. The odometer visually locks on to ground objects viewed by a pair of on-board cameras. Using high-speed image template matching, it estimates helicopter motion by sensing object displacements in consecutive images. The visual odometer is implemented with a custom-designed real-time, low-latency vision machine which modularly integrates field-rate (60 Hz) template matching processors, synchronized attitude sensing and image tagging circuitry, and image acquisition, convolution, and display hardware.
The visual odometer machine along with a carrier-phase differential Global Positioning System receiver, a classical PD control system, and human augmentation and safety systems are integrated on-board a mid-sized helicopter, the Yamaha R50, for vision-guided autonomous flight.
Acknowledgments

It has been a privilege to work with my advisor, Dr. Takeo Kanade. I am thankful for his support and teaching during my thesis work. He taught me how to build real working systems by persistently following every lead and attending to every detail with critical attention.
I would like to thank my committee members, Dr. Charles Thorpe, Dr. Charles Neuman, and Dr. Lee Weiss, for their advice and technical insight. I am thankful to Dr. Charles Thorpe for his guidance on a day-to-day basis and his generous sharing of his group’s resources for my work. In particular, I am thankful for the use of the Navlab autonomous vehicle for helicopter experiments.
I would like to thank Mark Delouis for assisting me throughout the span of my graduate work. He built all on-board helicopter mechanical and safety components and developed revolutionary hardware for the indoor helicopter testbeds. He mastered the challenging task of helicopter remote control and served as my safety pilot during every experiment. I am truly indebted to him for his diligence, patience, and kindness.
Finally, I would like to thank Keisuke Fujita and Yuji Mesaki of SECOM security company who supported my work during their two year stay at CMU Robotics Institute. They assisted me in the development of the on-board global positioning and image processing hardware.
Precise maneuverability of helicopters makes them useful for many critical tasks ranging from rescue and security to inspection and monitoring operations. Helicopters are indispensable air vehicles for finding and rescuing stranded individuals or transporting accident victims. Police departments use them to find and pursue criminals. Fire fighters use helicopters for precise delivery of fire extinguishing chemicals to forest fires. More and more electric power companies are using helicopters to inspect towers and transmission lines for corrosion and other defects and to subsequently make repairs. All of these applications demand dangerous close proximity flight patterns, risking human pilot safety. An unmanned autonomous helicopter will eliminate such risks and will increase the helicopter’s effectiveness.
Typical missions of autonomous helicopters require flying at low speeds to follow a path or to hover near an object of interest. Accurate estimation of the helicopter’s position relative to objects is necessary to perform such tasks. In general, positioning equipment such as inertial navigation systems or global positioning systems is well suited for long-range, low-precision helicopter flight but falls short for very precise, close-proximity flight. Moreover, these sensors estimate absolute position and cannot sense position relative to task objects of interest. Visual sensing is the richest source of data for this relative position estimation.
The work presented in this dissertation demonstrates stable helicopter control based primarily on visual feedback. An “eye-in-the-sky” robot helicopter is developed which can perform missions outdoors while flying autonomously. The helicopter can fly precisely and at close proximity to ground objects by maintaining its relative location by on-board vision.
1.1 Challenges of Vision-Based Helicopter Flight
Helicopters are inherently unstable and require constant compensation for stable flight. The effectiveness of an autonomous helicopter is critically dependent on its accurate and stable positioning relative to objects in the environment. Estimating this relative position by on-board vision, a 3D object tracking problem, is difficult for several reasons.
• Helicopters can move quickly. Small and mid-sized helicopters can accelerate at rates in the range of 0.5 g and can exhibit 40-60 degrees per second angular velocity under normal operating conditions. To keep up with the helicopter’s high degree of maneuverability, an on-board vision system must sample and process camera images at high frequency. On-board image processing must be performed at frame rate (30 Hz) or higher for effective vision-based object tracking. Higher-rate image sampling also simplifies the tracking problem by limiting object displacements in successive images.
• Helicopters are highly sensitive to control inputs. Feedback latency is critical to stable helicopter flight. High throughput of image processing alone is not sufficient. Object tracking must be performed with minimum latency to provide adequate and timely feedback for stability. Small model helicopters require system latencies of no more than 1/30 to 1/60 seconds for stability.
• Helicopters typically move with significant attitude variations. A helicopter can bank 30 degrees as it transitions to forward flight. To maintain relative position, on-board vision must distinguish helicopter translation from rotation. Distinguishing rotation from translation in images under perspective projection can be difficult since small attitude variations can look virtually indistinguishable from small translational motion. This effect is exaggerated for the helicopter application since tracked objects are frequently small relative to the helicopter altitude and cannot provide sufficient 3D cues for distinguishing rotation from translation.
• Helicopters have strictly limited payloads and available power. A vision system capable of meeting the above criteria must also be compact and efficient for practical on-board integration. Small (under 200 lbs) helicopters have payloads ranging from 5 to 40 pounds.
• Helicopters are dangerous. The spinning rotor blades pose an immediate danger to nearby individuals. The responsive nature of helicopters makes them prone to out-of-control flight or crashes during experiments.
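To make the rotation/translation ambiguity above concrete, the following is a small numerical sketch under a pinhole-camera model. The focal length, altitude, and motion values are illustrative assumptions, not figures from this work: a one-degree attitude change and a roughly 17 cm lateral translation produce nearly the same image displacement for a ground point viewed from 10 m.

```python
import math

def project(f, X, Z):
    """Pinhole projection: image coordinate (pixels) of a point at (X, Z)."""
    return f * X / Z

f = 800.0   # focal length in pixels (assumed)
Z = 10.0    # altitude above the tracked ground point, meters (assumed)
X = 0.5     # lateral offset of the point from the optical axis, meters

u0 = project(f, X, Z)

# Case 1: pure lateral translation of the camera by t meters.
t = 0.17
u_trans = project(f, X - t, Z)

# Case 2: pure attitude change of theta radians (rotate the point
# into the rotated camera frame).
theta = math.radians(1.0)
Xr = X * math.cos(theta) - Z * math.sin(theta)
Zr = X * math.sin(theta) + Z * math.cos(theta)
u_rot = project(f, Xr, Zr)

# Both displacements are ~14 pixels and differ by well under a pixel,
# so the two motions are nearly indistinguishable in the image.
print(u_trans - u0, u_rot - u0)
```

With only a small, distant patch of ground in view, this sub-pixel difference is below typical tracking noise, which is why attitude must be sensed separately (e.g., by gyroscopes) rather than recovered from the images alone.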
The research presented in this dissertation addresses these challenges by developing an autonomous vision-guided helicopter control system.
The three contributions of this dissertation are:
1. The first autonomous robot helicopter stabilized and guided by an on-board “visual odometer” for position estimation: The odometer visually locks on to ground objects and maintains helicopter position at field rate (60 Hz) during flight. The helicopter integrates the visual odometer with sensors such as gyroscopes and a global positioning system (GPS) receiver, control and actuation, as well as safety and human augmentation systems.
2. A new vision machine architecture for real-time and low latency image processing: The architecture balances computational power and data bandwidth requirements to realize vision machines tailored to the applications at hand. Based on this architecture, a visual odometer machine is designed and realized on-board an autonomous helicopter.
3. Innovative testbeds for effective indoor experimentation with helicopters: Most significant is an indoor six-degree-of-freedom testbed built with light-weight composite material to support an electrical model helicopter. The testbed provides safety by preventing helicopter crashes and measures helicopter ground-truth position during flight for calibrated experiments.
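As an illustration of the odometer’s core operation, here is a minimal pure-Python sketch of SSD (sum of squared differences) template matching on a synthetic image pair. The actual system performed this search in dedicated 60 Hz hardware on real camera imagery; the image sizes, pixel values, and search window below are made up for the example.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def extract(img, r, c, h, w):
    """Flatten the h-by-w window of img whose top-left corner is (r, c)."""
    return [img[r + i][c + j] for i in range(h) for j in range(w)]

def match(template, img, h, w, search):
    """Exhaustive SSD search over (row, col) in `search`; best position wins."""
    best, best_pos = None, None
    for r in range(search[0], search[1]):
        for c in range(search[2], search[3]):
            score = ssd(template, extract(img, r, c, h, w))
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic 20x20 frames: a bright 3x3 "ground object" shifts by (2, 3).
def frame(obj_r, obj_c):
    img = [[0] * 20 for _ in range(20)]
    for i in range(3):
        for j in range(3):
            img[obj_r + i][obj_c + j] = 255
    return img

prev, curr = frame(5, 5), frame(7, 8)
tmpl = extract(prev, 5, 5, 3, 3)          # lock on to the object in frame t
r, c = match(tmpl, curr, 3, 3, (0, 17, 0, 17))
print((r - 5, c - 5))                     # → (2, 3), the sensed displacement
```

Sensed displacements like this, combined with attitude and range information, are what the odometer integrates into a helicopter position estimate.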
1.3 Related Work
Building an autonomous helicopter system requires research in helicopter control as well as in helicopter position sensing. While this dissertation focuses on the position sensing aspect of the problem, it is important to recognize and employ existing work on helicopter control, visual servoing, and autonomous robotic systems to realize a working vision-guided robot helicopter.
1.3.1 Helicopter Control
The study of the helicopter control problem is not new. Helicopter dynamic modeling is well documented in the literature. In particular, Prouty [1] and Johnson present excellent comprehensive studies of helicopter aerodynamic models and stability analysis. Overcoming the inherent instability of helicopters has been the focus of a large body of research, including detailed mathematical models for control and Kalman filtering of multiple-sensor data for state estimation. Controller design methods range from linear quadratic (LQ) design to H-infinity design and predictive control. For example, a stable closed-loop control system has been formulated by quadratic synthesis techniques for helicopter autolanding.
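As a minimal illustration of the classical feedback approach underlying these designs (the flight controller developed later in this dissertation is itself a classical PD design), the sketch below stabilizes a unit-mass double integrator with a PD law. The plant and gains are illustrative assumptions, not a model of any real helicopter.

```python
def simulate(kp, kd, z0, steps=2000, dt=0.01):
    """PD regulation of z'' = u (unit-mass double integrator), Euler steps."""
    z, v = z0, 0.0              # position error and velocity
    for _ in range(steps):
        u = -kp * z - kd * v    # PD law: oppose error and its rate
        v += u * dt
        z += v * dt
    return z

# Critically damped choice (poles at s = -2): the 1 m initial error
# decays to essentially zero over the 20 s simulation.
residual = simulate(kp=4.0, kd=4.0, z0=1.0)
print(abs(residual))
```

Without feedback (both gains zero) the error never decays, which is the practical meaning of the helicopter’s open-loop instability noted above; the richer LQ, H-infinity, and predictive designs in the literature refine how such gains are chosen, not the basic closed-loop structure.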
Incorporation of a human pilot model has been attempted based on quadratic optimal Cooperative Control Synthesis. This model is used for control augmentation, where the control system cooperates with the pilot to increase aircraft performance. The sophisticated pilot model developed in [10] attempts to describe the human’s ability to look ahead. This ability is crucial to precise low-altitude helicopter control. While it is difficult to identify and verify these models, they provide a valuable basis for an intelligent helicopter controller, especially in the design of low-level control loops.
Manned flight tests of helicopter controllers have also been conducted. Notable implemented systems include those at NASA Ames Research Center, NASA Langley Research Center [9], and military aircraft manufacturers [11]. Fuzzy controllers have been successfully employed for helicopter flight experiments. In Japan, Sugeno’s group at Tokyo Institute of Technology has demonstrated helicopter control using fuzzy logic.
1.3.2 Controlling with Vision

The positioning feedback for the above helicopter control experiments is primarily provided by on-board INS/GPS or ground-based beacon systems instead of on-board computer vision. The computational complexity and the high data bandwidth requirements of vision have been major obstacles to practical and robust vision-based positioning and control systems. In spite of these drawbacks, promising results have been recently demonstrated in real-time vision processing, visual servoing of robotic manipulators, and accurate vision-based position estimation systems.
The development of low-cost special-purpose image correlation chips and multi-processor architectures capable of high communication rates has made a great impact on image processing. Examples of vision systems built from special-purpose hardware include transputer-based image hardware for two-dimensional object tracking [13], and real-time tracking and depth map generation using correlation chips [14].
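For context on the depth-map systems cited above, correlation hardware finds the disparity between matched windows in a rectified stereo pair, and depth then follows from the standard relation Z = fB/d. The sketch below shows only that final conversion; the focal length, baseline, and disparity values are assumed example numbers, not parameters of any cited system.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified-stereo depth: Z = f * B / d (f in pixels, B in meters)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# e.g. an 800-pixel focal length, 30 cm camera baseline, 12-pixel disparity:
Z = depth_from_disparity(800.0, 0.3, 12.0)
print(Z)   # → 20.0 meters
```

The inverse relationship between disparity and depth is why a stereo pair with a short baseline, such as one mounted on a small aircraft, loses range resolution quickly at altitude.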