An Autonomous Vision-Guided Helicopter
Omead Amidi
August 1996
Department of Electrical and Computer Engineering, Carnegie Mellon University
High-speed processors are making visual feedback increasingly practical in many control applications. There has been significant development in visual control of manipulators carrying small cameras, the so-called eye-in-hand configuration. Researchers at Carnegie Mellon University’s Robotics Institute have demonstrated real-time visual tracking of arbitrary 3D objects traveling at unknown 2D velocities using a direct-drive manipulator arm. The Yale spatial robot juggler [16] has demonstrated transputer-based stereo vision for locating juggling balls in real time. Real-time tracking and interception of objects using a manipulator [17] have also been demonstrated based on fusion of visual feedback and acoustic sensing.
RAPiD and DROID [18], developed by Roke Manor Research Limited, are systems designed for vision-based position estimation in unknown environments. RAPiD is a model-based tracker capable of extracting the position and orientation of known objects in the scene. DROID is a feature-based system which uses the structure-from-motion principle to extract scene structure from image sequences. Real-time implementations of these systems have been demonstrated using dedicated hardware.
1.3.3 Autonomous Systems
Integrating efficient model-based and learning techniques with powerful hardware architectures has produced an array of autonomous land and air vehicles. Significant advances in autonomous automobiles have demonstrated vision-based control at highway speeds. Most notable are Carnegie Mellon’s autonomous ground vehicle projects (Navlab [19], Automated Highway System [21], and the Unmanned Ground Vehicle project) and the work of Dickmanns on the European PROMETHEUS project at the University of the Bundeswehr, Munich.
Dickmanns applies an approach exploiting spatio-temporal models of objects in the world to control autonomous land and air vehicles. He has demonstrated autonomous position estimation for an aircraft on landing approach using a video camera, inertial gyros, and an air velocity meter. Vision-based state estimation is also pursued at NASA Ames Research Center using a parallel implementation of multi-sensor range estimation for helicopter flight.
An aerial robot competition sponsored by the Association for Unmanned Vehicle Systems has recently encouraged the development of a number of small autonomous vertical-takeoff robots. The competition task requires the flying robots to autonomously carry small objects from one location to another. More recently, tasks requiring on-board vision, such as object identification, have been added to the competition requirements.
The most notable competitors are teams from Stanford University, the University of Southern California (USC), the Georgia Institute of Technology, and the University of Texas at Arlington (UTA). Researchers at Stanford are concentrating on carrier-phase GPS technology for helicopter position and attitude estimation. The USC team is approaching the helicopter control problem based on a behavioral paradigm and low-complexity vision to aid helicopter navigation. The Georgia Tech robot supports on-board sensors and flight control systems; mission planning and vision tracking are performed off-board using a ground station. UTA researchers have developed a vertical-takeoff aircraft which uses a responsive rigid propeller instead of the traditional articulated helicopter rotor blade designs. They have integrated control, navigation, and communication systems on board their aircraft for autonomous operation.
The main goal of this dissertation is the development of an airworthy autonomous helicopter system which employs vision as its primary source of guidance and control. This goal is pursued by developing: a high-level position estimation algorithm through a visual odometer, a real-time and low latency vision machine architecture for on-board system implementation, and an array of experimental testbeds to incrementally design and evaluate each system component.
1.4.1 Vision-Based Position Estimation
A visual odometer locks on to and tracks feature-rich objects to sense helicopter motion. The odometer maintains this visual lock using high-speed image correlators which estimate helicopter range and motion relative to the ground objects. The odometer closely integrates on-board attitude sensors with the image correlators to resolve 3D helicopter translation, which is key to accurate helicopter control.
The visual odometer implements an object tracking algorithm. Viewing the ground through a pair of on-board cameras, the algorithm locks on to and tracks feature-rich objects appearing in image windows, or templates. As shown in Figure 1-1, the algorithm initially locks on to objects appearing at the image center and maintains this lock while the objects are in the field of view. As the objects leave the image, the algorithm selects another image template to lock on to and continues positioning the helicopter. Image templates are tracked by high-speed image correlation, or template matching. For full 3D motion estimation, the algorithm matches templates in both camera images simultaneously, for stereo range detection, and in successive images, for velocity estimation.
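The two operations described above can be illustrated with a brute-force software sketch (the thesis implements correlation with dedicated high-speed hardware rather than code like this; the function names here are illustrative). The first function locates a template in an image by exhaustive sum-of-squared-differences search; the second recovers range from the stereo disparity of a template matched in both camera images.

```python
import numpy as np

def match_template(image, template):
    """Find the best match of `template` in `image` by exhaustive
    sum-of-squared-differences (SSD) search.  Returns the (row, col)
    of the top-left corner of the best-matching window."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw]
            score = np.sum((window - template) ** 2)
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

def stereo_depth(disparity_px, focal_px, baseline_m):
    """Range from stereo disparity for parallel cameras: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px
```

Matching the same template in left and right images gives the disparity `d` for `stereo_depth`; matching it in successive frames gives the displacement used for velocity estimation.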
Since helicopter attitude and height variations can significantly affect the appearance of tracked objects in successive images, the algorithm must actively update image templates for robust tracking.
Furthermore, the algorithm must sense helicopter translation for accurate control which requires eliminating the effects of rotation on image displacements. The algorithm accomplishes these difficult tasks by tracking multiple templates and by measuring helicopter attitude with on-board angular sensors. The relative motion of two tracked templates in images determines height and heading changes
which are used for scaling or rotating tracked templates for consistent matches. The effects of helicopter roll and pitch variations are determined by tagging each image with helicopter attitude during the camera shutter exposure interval. Using this synchronized attitude data, the algorithm estimates the effects of rotation on image displacements based on camera lens parameters. This dissertation develops custom-designed hardware to filter and tag the camera images for the tracking algorithm.
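The rotation-compensation step can be sketched with the standard small-angle rotational optical-flow terms (a Python illustration with my own variable names; sign conventions depend on the chosen camera axes, and the thesis's actual formulation may differ). The predicted rotation-induced displacement is subtracted from the measured template displacement, leaving the translation-induced component:

```python
def rotational_flow(u, v, wx, wy, wz, f):
    """Image-plane displacement (du, dv) of a point at pixel offset
    (u, v) from the image center, caused purely by small camera
    rotations (wx, wy, wz in radians), using the standard rotational
    optical-flow terms for a z-forward camera frame."""
    du = (u * v / f) * wx - (f + u * u / f) * wy + v * wz
    dv = (f + v * v / f) * wx - (u * v / f) * wy - u * wz
    return du, dv

def compensate(meas_du, meas_dv, u, v, wx, wy, wz, f):
    """Subtract the rotation-induced part of a measured template
    displacement, isolating the translation-induced component."""
    ru, rv = rotational_flow(u, v, wx, wy, wz, f)
    return meas_du - ru, meas_dv - rv
```

At the image center (u = v = 0) the terms reduce to du = f·wz·0 = 0 and dv = f·wx, i.e. a roll of wx radians alone shifts the center point by roughly f·wx pixels, which is why attitude tagging must be synchronized with the shutter.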
1.4.2 Real-Time and Low Latency Vision
Controlling a highly unstable plant such as a small helicopter requires frequent state feedback with minimum latency. Providing this feedback with vision at suitably high rates and with small delay can be very challenging. This is especially true for computationally complex tasks such as image correlation or template matching. Processing rates of 30-60 Hz with 1/30 second latency are experimentally determined to be sufficient for stable model helicopter control.
In spite of the growing commercial development of high-speed vision systems, many are designed for high image-processing throughput with little regard for processing latency. Powerful general-purpose vision systems capable of low latency processing are too bulky and expensive for on-board integration. Furthermore, most commercial vision systems are incapable of precisely synchronizing external sensor data acquisition within the image processing pipeline. This capability is especially important to vision-based helicopter control, which requires synchronized data acquisition from a variety of on-board sensors. These factors motivated the development of a new real-time and low latency image processing architecture presented in this dissertation.
The architecture’s design incorporates processing capabilities modularly. A uniform communication scheme keeps all processing and sensing components compatible. The system’s processing pipeline can be easily expanded horizontally, to incorporate more processing capabilities, or vertically, to increase data bandwidth capacity. Images flow through the system via a network of high-speed point-to-point communication links with consistent latency, making up a 2D pipelined machine architecture.
The links interconnect modules, such as image convolvers or template matchers, which also have predictable performance and latency matched with other modules and the helicopter control system.
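The design constraint can be pictured with a toy serial-chain model (Python; the module names and latency figures below are hypothetical, not the actual hardware): each module declares a fixed latency, and the end-to-end latency of the chain must fit within the control loop's budget of roughly 1/30 second.

```python
class Module:
    """A processing stage with a fixed, declared latency in ms
    (a hypothetical model of a pipeline module such as a convolver)."""
    def __init__(self, name, latency_ms, fn):
        self.name, self.latency_ms, self.fn = name, latency_ms, fn
    def __call__(self, frame):
        return self.fn(frame)

def pipeline_latency(stages):
    """End-to-end latency of a serial chain of modules."""
    return sum(s.latency_ms for s in stages)

def run(stages, frame):
    """Push one frame through the chain."""
    for s in stages:
        frame = s(frame)
    return frame

# A hypothetical chain: convolve -> match -> estimate
stages = [
    Module("convolver", 8, lambda im: im),   # e.g. smoothing filter
    Module("matcher",  16, lambda im: im),   # template correlation
    Module("estimator", 2, lambda im: im),   # position update
]
assert pipeline_latency(stages) <= 33       # within the ~1/30 s budget
```

Because every module's latency is declared and fixed, adding a stage horizontally changes the budget by a known amount, while widening the pipeline vertically raises bandwidth without touching the latency sum.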
1.4.3 Experimental Testbeds
Building an autonomous robot helicopter requires careful and calibrated experimentation with real helicopters. This is difficult because helicopters are unstable and typically exhibit undesirable oscillatory behavior without active and frequent control compensation. There is also the danger of the helicopter fuselage or the spinning rotor blades crashing into nearby individuals and causing major harm during experiments.
Repeated failures due to faulty sensors, computer algorithms, or helicopter mechanics are likely during development, and careful safety measures must be in place to protect the researchers and the experimental aircraft. Outfitting a helicopter for outdoor free flight from the start can be risky since each unavoidable malfunction is a major loss of time, effort, and resources. With these concerns, this thesis develops a series of innovative testbeds for safe indoor experimentation.
Each testbed supports an electric or gas-powered model helicopter for inexpensive and logistically manageable experiments indoors. Model helicopters are faithful reproductions of full-size helicopters with respect to the crucial rotor controls, and the techniques developed to control them directly apply to larger helicopters. In most cases, the smaller models are more agile and usually more difficult to control than the larger and less responsive helicopters. The testbeds limit the allowable helicopter travel area and attainable velocity using mechanical links and dampers, thereby reducing the risk of mechanical failure from hard landings and violent flight patterns. Minimizing the effects of these mechanical linkages on helicopter dynamics is the main challenge in designing such testbeds.
A chronological progression of indoor testbeds to outdoor flight is shown in Figure 1-2. Experiments with small model helicopters were started with an attitude control testbed (a) limiting helicopter motion to only one rotational axis and continued with a six-degree-of-freedom testbed (b) which allowed full helicopter motion in a semi-spherical area. The experiments with the small model helicopters were expanded to another indoor testbed (c) housing a mid-sized helicopter, the Yamaha R50, which is also employed for the outdoor prototype autonomous helicopter (d).
Beyond addressing the safety issues, the indoor testbeds are designed to provide several other important capabilities. Each testbed is outfitted with non-intrusive sensors for accurate measurement of helicopter ground-truth position and attitude for quantitative performance evaluation. With this ground-truth feedback, experiments on key components such as position estimation algorithms or sensor platforms can be performed independently allowing precise analysis of different components under controlled conditions.
1.5 Dissertation Overview
The dissertation is divided into six chapters. Chapter 2 provides a high-level view of the dissertation’s vision-based position estimation by describing a visual odometer positioning algorithm. The chapter describes the issues involved in helicopter positioning and the techniques employed to address these issues.
Building on vision-based techniques, Chapter 3 addresses design issues in building vision systems for high speed applications such as helicopter control. The chapter introduces a reconfigurable vision machine architecture designed for low latency and real-time image processing and describes the architecture by analyzing a visual odometer machine developed for on-board helicopter position estimation.
Chapter 4 and Chapter 5 present indoor and outdoor experiments to build a prototype vision-guided autonomous helicopter. Chapter 4 presents the indoor design and evaluation of an autonomous helicopter system. The chapter presents the development of an indoor testbed for the Yamaha R50 helicopter, which is employed to verify a PD-based helicopter controller and to prove the effectiveness of the visual odometer machine in positioning the helicopter. Chapter 5 presents outdoor flight experiments using a fully integrated autonomous helicopter supporting on-board vision, GPS, and real-time control systems. Results of outdoor free flight tests are presented, and vision-based helicopter positioning and control performance are evaluated using accurate carrier-phase GPS receivers.
Finally, Chapter 6 presents conclusions and preliminary experiments that probe future research directions for the work presented in this dissertation. These directions include vision-based object tracking, helicopter positioning in known environments, and vision machine designs for other applications such as factory inspection and industrial robotics.
The control of an autonomous helicopter is only as good as its positioning. Control response and accuracy are, in essence, dictated by how frequently and how promptly the helicopter’s position is determined during flight. Positioning the helicopter can be either global or relative, depending on the task at hand. Global positioning is necessary for long-distance flight where the helicopter must reach a predetermined destination. Relative positioning, on the other hand, is necessary for precise flight in relation to objects of interest in the environment. Vision is particularly well suited for this type of relative positioning and is the main focus of this chapter.
On-board vision can estimate helicopter motion by tracking stationary objects in the surrounding environment. Objects are displaced in consecutive images as the helicopter moves, and this displacement can be accurately measured by image processing to determine helicopter motion.
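For a downward-looking camera, the geometry behind this measurement is simple: under pure lateral translation T, a stationary ground point at range Z shifts in the image by d = f·T/Z pixels, so the motion recovers as T = Z·d/f. A minimal sketch (Python; the function name and argument names are illustrative):

```python
def translation_from_displacement(du_px, dv_px, height_m, focal_px):
    """Recover lateral helicopter translation (tx, ty) in meters from
    the image displacement (du_px, dv_px) of a tracked stationary
    ground object, given the height above the object and the focal
    length in pixels.  Assumes a downward-looking camera and pure
    translation (rotation effects already compensated)."""
    tx = height_m * du_px / focal_px
    ty = height_m * dv_px / focal_px
    return tx, ty
```

For example, a 50-pixel displacement seen from 10 m altitude through a 500-pixel focal length corresponds to 1 m of translation; this is also why the range estimate Z from stereo is a prerequisite for metric motion estimation.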