DATA FUSION OF LIDAR INTO A REGION GROWING STEREO ALGORITHM
Keywords: Stereo vision, LIDAR, multisensor data fusion, calibration, 3D reconstruction
Abstract. Stereo vision and LIDAR continue to dominate standoff 3D measurement techniques in photogrammetry, although the two techniques are normally used in competition rather than in combination. Stereo matching algorithms generate dense 3D data, but perform poorly on low-texture image features. LIDAR measurements are accurate, but imaging requires scanning and produces sparse point clouds. Clearly the two techniques are complementary, but recent attempts to improve stereo matching performance on low-texture surfaces using data fusion have focused on the use of time-of-flight cameras, with comparatively little work involving LIDAR.
A low-level data fusion method is presented, involving a scanning LIDAR system and a stereo camera pair. By directly imaging the LIDAR laser spot during a scan, unique stereo correspondences are obtained. These correspondences are used to seed a region-growing stereo matcher until the whole image is matched. The iterative nature of the acquisition process minimises the number of LIDAR points needed. The method also enables simple, target-free calibration of the stereo cameras and trivial co-registration of the stereo and LIDAR point clouds. Examples of this data fusion technique are provided for a variety of scenes.
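To illustrate the seeding idea described above, the sketch below grows a disparity map outward from a handful of seed correspondences, accepting a neighbouring pixel only when its normalised cross-correlation score exceeds a threshold. This is a minimal NumPy sketch under assumed conventions (rectified images, disparity = left column minus right column); the function names, window size, and threshold are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def ncc(a, b):
    # Normalised cross-correlation between two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def region_grow_stereo(left, right, seeds, win=3, search=2, thresh=0.7):
    """Grow a disparity map outward from seed correspondences.

    seeds: (row, left_column, disparity) triples, e.g. obtained by
    imaging the LIDAR laser spot in both cameras (hypothetical input).
    """
    h, w = left.shape
    disp = np.full((h, w), np.nan)   # NaN marks unmatched pixels
    frontier = list(seeds)
    for r, c, d in seeds:
        disp[r, c] = d
    while frontier:
        r, c, d = frontier.pop()
        # Try to extend the match to the 4-connected neighbours.
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if not (win <= nr < h - win and win <= nc < w - win):
                continue
            if not np.isnan(disp[nr, nc]):
                continue
            patch = left[nr - win:nr + win + 1, nc - win:nc + win + 1]
            best_score, best_d = thresh, None
            # Search disparities near the neighbour's disparity only.
            for dd in range(-search, search + 1):
                cand = int(round(d)) + dd
                rc = nc - cand  # matching column in the right image
                if not (win <= rc < w - win):
                    continue
                score = ncc(patch,
                            right[nr - win:nr + win + 1,
                                  rc - win:rc + win + 1])
                if score > best_score:
                    best_score, best_d = score, cand
            if best_d is not None:
                disp[nr, nc] = best_d
                frontier.append((nr, nc, best_d))
    return disp

# Demo on synthetic data: the right image is the left shifted by 5 px,
# and a single seed correspondence stands in for one LIDAR spot.
rng = np.random.default_rng(0)
left = rng.random((50, 60))
d_true = 5
right = np.zeros_like(left)
right[:, :-d_true] = left[:, d_true:]
disp = region_grow_stereo(left, right, seeds=[(25, 30, d_true)])
```

Restricting each neighbour's disparity search to a small window around the already-matched pixel is what makes a few sparse seeds sufficient: the search space per pixel stays tiny, and matches propagate outward until the correlation test fails.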