The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-1/W1-2023
https://doi.org/10.5194/isprs-archives-XLVIII-1-W1-2023-363-2023
25 May 2023

DEEP LEARNING TO SUPPORT 3D MAPPING CAPABILITIES OF A PORTABLE VSLAM-BASED SYSTEM

N. Padkan, R. Battisti, F. Menna, and F. Remondino

Keywords: vSLAM, Real-Time, Low-Cost, Mobile Mapping, Deep Learning, Image Segmentation, Monocular Depth Estimation

Abstract. The use of vision-based localization and mapping techniques, such as visual odometry and SLAM, has become increasingly prevalent in the field of Geomatics, particularly in mobile mapping systems. These methods provide real-time estimation of the 3D scene as well as of the sensor's position and orientation, using images or LiDAR data acquired from a moving platform. While visual odometry primarily focuses on estimating the camera's pose, SLAM also creates a 3D reconstruction of the environment. Both conventional (geometric) and learning-based approaches are used in visual SLAM, with deep learning networks being integrated to perform semantic segmentation, object detection and depth prediction. The goal of this work is to report ongoing developments to extend the GuPho stereo-vision SLAM-based system with deep learning networks for tasks such as crack detection, obstacle detection and depth estimation. Our findings show how a neural network can be coupled to SLAM sequences in order to support 3D mapping applications with semantic information.
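To illustrate the coupling described in the abstract, the sketch below attaches per-pixel network predictions to SLAM keyframes so that the resulting 3D map carries semantic labels. This is a minimal, hypothetical example: `segment_cracks` is a stand-in stub (a plain intensity threshold), not the paper's actual crack-detection network, and the keyframe structure (pose matrix + grayscale image) is assumed for illustration only.

```python
import numpy as np

def segment_cracks(image: np.ndarray) -> np.ndarray:
    """Stub for a crack-segmentation network: a simple intensity
    threshold stands in for the learned per-pixel prediction
    (1 = crack-like dark pixel, 0 = background)."""
    return (image < 50).astype(np.uint8)

def annotate_keyframes(keyframes):
    """Attach a semantic mask to each SLAM keyframe (pose + image),
    so downstream 3D mapping can carry per-pixel labels."""
    annotated = []
    for kf in keyframes:
        mask = segment_cracks(kf["image"])
        annotated.append({"pose": kf["pose"], "mask": mask})
    return annotated

# Two synthetic 4x4 grayscale keyframes with identity poses:
# the first is bright (no detections), the second dark (all detected).
frames = [
    {"pose": np.eye(4), "image": np.full((4, 4), 200, dtype=np.uint8)},
    {"pose": np.eye(4), "image": np.zeros((4, 4), dtype=np.uint8)},
]
result = annotate_keyframes(frames)
```

In a real pipeline the stub would be replaced by a trained segmentation or depth network, and the masks would be back-projected through the stored poses into the SLAM point cloud.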