# MULTI-CAMERA SIMULTANEOUS LOCALIZATION AND MAPPING

Brian Sanderson Clipp. A dissertation submitted to the faculty of the University of North Carolina ...

simpler algorithm developed by Li and Hartley, which gives the same result (Li and Hartley, 2006). The five correspondences are selected by the RANSAC (Random Sample Consensus) algorithm (Fischler and Bolles, 1981). The distance between a selected feature and its corresponding epipolar line is used as the inlier criterion in RANSAC. The essential matrix is decomposed into a skew-symmetric translation matrix and a rotation matrix. When decomposing the essential matrix into rotation and translation, the chirality constraint is used to determine the correct configuration (Hartley and Zisserman, 2004). At this point, the translation is recovered only up to scale.
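The decomposition-with-chirality step can be sketched as follows. This is a minimal numpy illustration of the standard textbook procedure (Hartley and Zisserman, 2004), not the dissertation's implementation; it assumes normalized image coordinates for the single inlier correspondence used in the depth check, and the helper names are illustrative:

```python
import numpy as np

def triangulate(R, t, x1, x2):
    """Linear (DLT) triangulation with P1 = [I|0], P2 = [R|t]."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]        # null vector of A
    return X[:3] / X[3]

def decompose_essential(E, x1, x2):
    """Decompose E into the four (R, t) candidates and pick the one
    satisfying chirality: the triangulated point must lie in front of
    both cameras. x1, x2 are normalized coordinates of one inlier."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:           # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    t = U[:, 2]                        # translation direction up to sign
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for tc in (t, -t):
            X = triangulate(R, tc, x1, x2)
            if X[2] > 0 and (R @ X + tc)[2] > 0:   # positive depth in both views
                return R, tc
    return None
```

Generically, exactly one of the four candidate configurations places the triangulated point in front of both cameras, which is why a single inlier correspondence suffices for the check.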

To find the scale of translation, we use Eq. 3.4 with RANSAC. One correspondence is randomly selected from the second camera and is used to calculate a scale value based on the constraint given in Eq. 3.4. We have also used a variant of the pbM-Estimator (Chen and Meer, 2003) to find the initial scale estimate, achieving results and speed similar to the RANSAC approach. This approach forms a continuous function from the discrete scale estimates of the individual correspondences in the second camera and selects the maximum of that continuous function as the initial scale estimate.
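The pbM-style variant can be sketched as a kernel-density mode search over the discrete per-correspondence scale estimates. The Gaussian kernel, bandwidth, and grid resolution below are illustrative choices; Eq. 3.4 and the estimator's exact kernel are not reproduced in this excerpt:

```python
import numpy as np

def select_scale(scale_estimates, bandwidth=0.05):
    """pbM-style initial scale: smooth the discrete per-correspondence
    scale estimates with a Gaussian kernel and return the mode of the
    resulting continuous function, evaluated on a dense grid."""
    s = np.asarray(scale_estimates, float)
    grid = np.linspace(s.min(), s.max(), 512)
    # kernel density: sum of Gaussians centered on each scale estimate
    density = np.exp(-0.5 * ((grid[:, None] - s[None, :]) / bandwidth) ** 2).sum(axis=1)
    return grid[np.argmax(density)]
```

Because the mode of the smoothed density is taken rather than a mean, outlier correspondences that produce wildly wrong scale values have little influence on the initial estimate.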

Based on this scale factor, the translation direction and rotation of the first camera, and the known extrinsics between the cameras, an essential matrix is generated for the second camera. Inlier correspondences in the second camera are then determined based on their distance to the epipolar lines. A linear least squares estimate of the scale factor is then made with all of the inlier correspondences from the second camera. This linear solution is refined with a non-linear minimization using the graduated non-convexity (GNC) function (Blake and Zisserman, 1987), which takes into account the influence of all correspondences, not just the inliers of the RANSAC sample, in calculating the error. This error function measures the distance of all correspondences to their epipolar lines and varies smoothly between zero, for a perfect correspondence, and one, for an outlier whose distance to the epipolar line exceeds some threshold. One could just as easily take single-pixel steps from the initial linear solution in the direction that maximizes the number of inliers or, equivalently, minimizes the robust error function. The non-linear minimization simply allows us to select step sizes based on the sampled Jacobian of the error function, which should converge faster than single-pixel steps and allows for sub-pixel precision.
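The refinement step might look like the following sketch. The robust kernel is a stand-in of the kind described (zero for a perfect correspondence, saturating toward one past a threshold); the Blake-Zisserman GNC continuation schedule itself is not reproduced here, and `epipolar_dist` is a hypothetical callback returning per-correspondence epipolar distances as a function of the scale:

```python
import numpy as np

def robust_cost(residuals, threshold=2.0):
    """Smooth robust error in [0, 1) per correspondence: zero for a perfect
    correspondence, saturating toward one as the epipolar distance passes
    the threshold (a Geman-McClure-style kernel as a GNC stand-in)."""
    r2 = (np.asarray(residuals, float) / threshold) ** 2
    return (r2 / (1.0 + r2)).sum()

def refine_scale(s0, epipolar_dist, step=1e-4, iters=100):
    """1-D Newton-style refinement of the scale using sampled derivatives
    of the total robust cost, in place of fixed single-pixel steps."""
    s = s0
    f = lambda x: robust_cost(epipolar_dist(x))
    for _ in range(iters):
        g = (f(s + step) - f(s - step)) / (2 * step)            # sampled gradient
        h = (f(s + step) - 2 * f(s) + f(s - step)) / step ** 2  # sampled curvature
        if h <= 0:                      # outside the convex basin: small safe step
            s -= step * np.sign(g)
        else:                           # Newton step with sampled Jacobian
            s -= g / h
    return s
```

The curvature-scaled step is what gives the faster convergence and sub-pixel precision mentioned above, while the fallback branch degenerates to small fixed steps when the sampled curvature is unusable.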

Following reﬁnement of the scale estimate, the inlier correspondences of the second camera are calculated and their number is used to score the current RANSAC solution.

The ﬁnal stage in the scale estimation algorithm is a bundle adjustment of the multi-camera system’s motion. Inliers are calculated for both cameras and they are used in a bundle adjustment reﬁning the rotation and scaled translation of the total, multi-camera system.

While this algorithm is described for a system of two cameras, it extends naturally to any number of rigidly mounted cameras. The RANSAC for the initial scale estimate, the initial linear solution, and the non-linear refinement are performed over correspondences from all cameras other than the camera used in the five-point pose estimate. The final bundle adjustment is then performed over all of the system's cameras.

## 3.5 Experiments

We begin with results using synthetic data to show the algorithm's performance over varying noise levels and different camera system motions. Following these results, we show the system operating on real data and measure its performance against a GPS/INS (Global Positioning System / inertial navigation system). The GPS/INS measurements are post-processed and are accurate to 4 cm in position and 0.03 degrees in rotation, providing a good basis for error analysis.

We use results on synthetic data to demonstrate the performance of the 6DOF motion estimate in the presence of varying levels of Gaussian noise on the correspondences over a variety of motions. A set of 3D points was generated within the walls of an axis-aligned cube. Each cube wall consisted of 5000 3D points randomly distributed within a 20 m x 20 m x 0.5 m volume. The two-camera system, which has an inter-camera distance of 1.9 m, a 100° angle between optical axes, and non-overlapping fields of view, is initially positioned at the center of the cube with identity rotation. A random motion for the camera system was then generated. The camera system's rotation was drawn from a uniform ±6° distribution sampled independently in each Euler angle. Additionally, the system was translated by a uniformly distributed distance of 0.4 m to 0.6 m in a random direction. A check for degenerate motion is performed by measuring the distance between the epipole of the second camera (see Fig. 3.3) due to rotation of the camera system alone and the epipole due to the combination of rotation and translation. This check is possible because we have perfect knowledge of the camera motion in the synthetic data. Only results of non-degenerate motions, with epipole separations equivalent to a 5° angle between the translation vector and the rotation-induced translation vector, are shown.

**Figure 3.7: Angle Between True and Estimated Rotations, Synthetic Results of 100 Samples Using Two Cameras**
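The non-degeneracy test reduces to an angle check between two direction vectors. A minimal sketch, using the 5° threshold from the text (the function names are illustrative):

```python
import numpy as np

def angle_between_deg(v1, v2):
    """Angle in degrees between two direction vectors."""
    u1 = np.asarray(v1, float) / np.linalg.norm(v1)
    u2 = np.asarray(v2, float) / np.linalg.norm(v2)
    return np.degrees(np.arccos(np.clip(u1 @ u2, -1.0, 1.0)))

def is_degenerate(t_translation, t_rotation_induced, min_angle_deg=5.0):
    """Flag a sampled motion as degenerate when the translation vector and
    the rotation-induced translation vector are separated by less than the
    threshold angle, i.e. the two epipoles nearly coincide."""
    return angle_between_deg(t_translation, t_rotation_induced) < min_angle_deg
```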

Results are given for 100 sample motions at each level of zero-mean Gaussian white noise added to the projections of the 3D points into the system's cameras.

The synthetic cameras have calibration matrices and fields-of-view that match the cameras used in our real multi-camera system. Each real camera has an approximately 40° x 30° field-of-view and a resolution of 1024 x 768 pixels.
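Under the usual pinhole assumptions (square pixels, centered principal point, no skew), a calibration matrix matching these fields of view can be derived as follows. This is a generic construction, not the actual calibration of the real cameras:

```python
import numpy as np

def calibration_from_fov(width_px, height_px, hfov_deg, vfov_deg):
    """Pinhole calibration matrix K from image size and field-of-view,
    assuming a centered principal point and zero skew.
    Focal length in pixels: f = (size/2) / tan(fov/2)."""
    fx = (width_px / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)
    fy = (height_px / 2.0) / np.tan(np.radians(vfov_deg) / 2.0)
    return np.array([[fx, 0.0, width_px / 2.0],
                     [0.0, fy, height_px / 2.0],
                     [0.0, 0.0, 1.0]])
```

For a 1024 x 768 image with a 40° x 30° field-of-view, this gives a focal length of roughly 1400 pixels.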

Results on synthetic data are shown in Figures 3.7 to 3.10. One can see that the system is able to estimate the rotation (Fig. 3.7) and translation direction (Fig. 3.8) well given noise levels that could be expected using a 2D feature tracker on real data. Figure 3.9 shows a plot of ‖T_est − T_true‖ / ‖T_true‖. This ratio measures the accuracy of both the estimated translation direction and the scale of the translation; ideally it would be zero, with the true and estimated translation vectors identical. Given the challenges of translation estimation and the precision of the rotation estimate, we use this ratio as the primary performance metric for the 6DOF motion estimation algorithm. The translation vector ratio, together with the rotation error plot, demonstrates that the system performs well at noise levels expected in real tracking results.
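For concreteness, the metric is a one-liner:

```python
import numpy as np

def translation_error_ratio(t_est, t_true):
    """Combined direction-and-scale error: ||T_est - T_true|| / ||T_true||.
    Zero when the estimated and true translation vectors coincide."""
    t_est, t_true = np.asarray(t_est, float), np.asarray(t_true, float)
    return np.linalg.norm(t_est - t_true) / np.linalg.norm(t_true)
```

Note that a correct direction with a doubled scale and a correct scale with an orthogonal direction both score poorly, which is what makes the ratio a useful single summary of 6DOF translation quality.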

**Figure 3.10: Scale Ratio, Synthetic Results of 100 Samples Using Two Cameras**

### 3.5.2 Real data

For a performance analysis on real data, we collected video using an eight-camera system mounted on a vehicle. The system included a highly accurate GPS/INS unit, which allows comparison of the scaled camera system motion calculated with our method to ground truth measurements. The eight cameras have almost no overlap, to maximize the total field-of-view, and are arranged in two clusters facing toward opposite sides of the vehicle. In each cluster, the camera centers are within 25 cm of each other. A camera cluster is shown in Fig. 3.1. The camera clusters are separated by approximately 1.9 m, and the line between them is approximately parallel to the rear axle of the vehicle. Three of the four cameras in each cluster together cover a horizontal field-of-view of approximately 120° x 30° on each side of the vehicle. The fourth camera points to the side of the vehicle and upward; its principal axis makes a 30° angle with the horizontal plane of the vehicle, the plane that contains the optical axes of the other three cameras.

In these results on real data, we take advantage of the fact that we have six horizontal cameras and use all of them to calculate the 6DOF system motion. The upward-facing cameras were not used because they only recorded sky in this sequence. For each pair of frames recorded at different times, each camera in turn is selected and the five-point pose estimate is performed for that camera, using correspondences found with a KLT (Lucas and Kanade, 1981) 2D feature tracker.

**Figure 3.11: Angle Between True and Estimated Translation Vectors, Real Data with Six Cameras**

The other cameras are then used to calculate the scaled motion of the camera system, using the five-point estimate from the selected camera as an initial estimate of the camera system's rotation and translation direction. The 6DOF motion solution for each camera selected for the five-point estimate is scored by the fraction of inliers over all other cameras. The motion with the largest fraction of inliers is selected as the 6DOF motion for the camera system.
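The selection loop can be sketched as follows. The three callables are hypothetical placeholders for the five-point solver, the scale-estimation stage, and the per-camera inlier scoring described above; only the control flow is meant literally:

```python
import numpy as np

def best_system_motion(cameras, five_point_pose, scaled_motion, inlier_fraction):
    """Try each camera as the reference for the five-point estimate,
    score each candidate 6DOF motion by the mean inlier fraction over
    all other cameras, and return the best-scoring motion."""
    best, best_score = None, -1.0
    for ref in cameras:
        R, t_dir = five_point_pose(ref)           # rotation + unit translation
        motion = scaled_motion(ref, R, t_dir)     # scale fixed by other cameras
        score = np.mean([inlier_fraction(cam, motion)
                         for cam in cameras if cam is not ref])
        if score > best_score:
            best, best_score = motion, score
    return best, best_score
```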

In Table 3.1, we show the effect of the critical motions described in Section 3.3.2 over a sequence of 200 frames. Critical motions were detected using the QDEGSAC (Frahm and Pollefeys, 2006) approach described in that section. Even under critical motion, the system degrades to the standard 5DOF motion estimation from a single camera, and only the scale remains ambiguous, as shown by the translation direction and rotation angle errors in Figures 3.11 and 3.12. This graceful degradation to the one-camera motion estimation solution means that the algorithm solves for all of the degrees of freedom of motion observable in the data provided to it.

In this particular experiment, the system appears to consistently underestimate the scale when the motion is non-critical. This is likely due to a combination of error in the camera system extrinsics and error in the GPS/INS ground truth measurements.

**Table 3.1: Relative translation vector error, including the angle and the error of the relative translation vector length, mean ± std. dev.**

Figure 3.15 shows the path of the vehicle-mounted multi-camera system and the locations where the scale can be estimated.

From the map, it is clear that the scale cannot be estimated in straight segments or in smooth turns. This is due to the constant rotation rate critical motion condition described in Section 3.3.2.

We selected a small section of the camera path, circled in Figure 3.15, and used a calibrated structure from motion (SfM) system to reconstruct the motion of one of the system's cameras. For a ground-truth measure of scale error accumulation, we scaled the distance traveled by a camera between two frames at the beginning of this reconstruction to match the true scale of the camera motion according to the GPS/INS measurements. Figure 3.16 shows how error in the scale accumulates over the 200 frames (recorded at 30 frames per second) of the reconstruction. We then processed the scale estimates from the 6DOF motion estimation system with a Kalman filter to determine the scale of the camera's motion over many frames, and measured the error in the SfM reconstruction scale using only our algorithm's scale measurements. The scale estimates from the 6DOF motion estimation algorithm clearly track the scale drift and provide a measure of absolute scale.
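A scalar Kalman filter over per-frame scale measurements, as used here, can be sketched as follows. The process and measurement noise values are illustrative, and the process model simply assumes the scale varies slowly from frame to frame:

```python
import numpy as np

def kalman_scale(measurements, q=1e-4, r=1e-2, s0=1.0, p0=1.0):
    """Scalar Kalman filter for the scale. State: the scale itself,
    assumed locally constant with process noise q; each per-frame scale
    estimate is treated as a measurement with noise r."""
    s, p = s0, p0
    filtered = []
    for z in measurements:
        p = p + q                  # predict: scale drifts slowly
        k = p / (p + r)            # Kalman gain
        s = s + k * (z - s)        # update with the new scale measurement
        p = (1.0 - k) * p
        filtered.append(s)
    return np.array(filtered)
```

Because the gain shrinks as the estimate settles, noisy single-frame scale measurements are averaged over many frames while slow drift in the true scale is still tracked.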

## 3.6 Conclusion

This chapter has introduced a novel algorithm that determines the 6DOF motion of a rigid multi-camera system with non-overlapping fields of view. We have provided a complete analysis of the critical motions of the multi-camera system that make the absolute scale unobservable. Our algorithm can detect these critical motions and gracefully degrades to estimation of the epipolar geometry. We have demonstrated the performance of our solution on both synthetic and real motion sequences. Additionally, we embedded our algorithm in a structure from motion system to demonstrate that our technique allows the determination of absolute scale without requiring overlapping fields of view.

The next chapter will introduce a different minimal solution method for the six-degree-of-freedom motion of a partially overlapping stereo camera pair. The approach is designed to overcome the degeneracies inherent in estimating six-degree-of-freedom motion for non-overlapping cameras by taking advantage of a small region of overlap in the rigidly mounted cameras' fields of view.

## 4.1 Introduction

In this chapter, we present a new minimal solution method for use in stereo-camera-based structure from motion (SfM) or visual simultaneous localization and mapping (VSLAM).

This novel minimal solution method overcomes the degeneracies in absolute scaled motion estimation inherent in non-overlapping rigid two camera systems, including the most common case of pure translational motion.

Our principal application is VSLAM for a humanoid robot. The approach proposed in this work is analogous to human vision, where the two eyes overlap in only part of the total viewing frustum. Excluding the prior models humans possess, such as relative sizes of objects, expected relative positions, and expected ego-motion, depth can be perceived from the region of overlap between the eyes, while rotation is derived from both overlapping and non-overlapping regions. This configuration of eyes (or cameras) provides a large total field-of-view for the combined camera system while allowing the scale to be fixed from triangulated features in the cameras' region of overlap. This gives the best of what a two-camera system can deliver: a wide field-of-view for accurate rotation estimation together with an absolutely scaled translation measurement.