The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XL-1
https://doi.org/10.5194/isprsarchives-XL-1-301-2014
07 Nov 2014

RGB-D Indoor Plane-based 3D-Modeling using Autonomous Robot

N. Mostofi, A. Moussa, M. Elhabiby, and N. El-Sheimy

Keywords: RGB-D Sensor, RANSAC, ICP, Visual odometry

Abstract. 3D models of indoor environments provide rich information that can facilitate the disambiguation of different places and help remote users become familiar with an indoor environment. In this work, we describe a system for visual odometry and 3D modeling using information from an RGB-D sensor (camera). The visual odometry method estimates the relative pose between consecutive RGB-D frames through feature extraction and matching techniques. The pose estimated by the visual odometry algorithm is then refined with the iterative closest point (ICP) method. Switching between ICP and visual odometry when no features are visible suppresses inconsistency in the final map. Finally, loop closure is applied to remove the drift between the first and last frames. To give the 3D model semantic meaning, planar patches are segmented from the RGB-D point cloud data using a region growing technique, followed by a convex hull method to assign boundaries to the extracted patches. To build the final semantic 3D model, the segmented patches are merged using the relative pose information obtained in the first step.
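
As an illustration only (not the authors' implementation), the following Python sketch shows how two of the building blocks named in the abstract could be prototyped with the Open3D library: ICP refinement of a visual-odometry pose estimate, and planar patch extraction by RANSAC (the keywords mention RANSAC, while the segmentation described in the abstract uses region growing). The function names, thresholds, and the choice of Open3D are assumptions made for the sketch.

    # Illustrative sketch only, assuming the Open3D library; not the authors' code.
    import open3d as o3d

    def refine_pose_with_icp(source_pcd, target_pcd, vo_pose, max_corr_dist=0.05):
        """Refine the relative pose from visual odometry with point-to-point ICP."""
        result = o3d.pipelines.registration.registration_icp(
            source_pcd, target_pcd, max_corr_dist, vo_pose,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation  # 4x4 refined pose of source w.r.t. target

    def extract_planar_patch(pcd, dist_thresh=0.02):
        """Fit one dominant plane with RANSAC and return its model and inlier patch."""
        plane_model, inliers = pcd.segment_plane(
            distance_threshold=dist_thresh, ransac_n=3, num_iterations=1000)
        return plane_model, pcd.select_by_index(inliers)

In such a pipeline, the visual-odometry pose would serve as the initial guess for ICP, and the extracted patches would be transformed by the refined poses before merging into the final model.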