The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-1/W5-2025
https://doi.org/10.5194/isprs-archives-XLVIII-1-W5-2025-93-2025
05 Nov 2025

Deep Learning in Visual Odometry for Autonomous Driving

Luca Morelli, Paweł Trybała, Armando Vittorio Razzino, and Fabio Remondino

Keywords: PNT, Visual-SLAM, Deep learning, Local features, Simulation, DROID-SLAM

Abstract. Positioning, Navigation, and Timing (PNT) solutions are fundamental for autonomous driving, ensuring reliable localization for safe vehicle control in diverse environments. While GNSS-based systems provide absolute positioning, they become unreliable in GNSS-denied scenarios such as urban canyons or tunnels. Dead reckoning techniques, including Visual Odometry (VO), offer an alternative by estimating motion from onboard sensors. Integrating these methods with deep learning (DL) has shown potential for enhancing robustness, particularly in challenging conditions. This study, part of the VAIPOSA ESA project, investigates the performance of VO solutions under various environmental conditions using a simulation-based approach. The CARLA simulator provides controlled testing scenarios, enabling the evaluation of VO accuracy across different weather conditions, illumination changes, and dynamic environments. A synthetic stereo setup enables the capture of error-free ground-truth trajectories and a fair evaluation of the VO methods. Multiple sequences are analyzed, reflecting real-world challenges such as poor visibility, texture variations, and occlusions. The findings highlight the influence of environmental factors and dynamic objects on VO performance and the role of DL in mitigating common failure modes.
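Evaluating VO accuracy against ground-truth trajectories, as the abstract describes, is commonly done with the Absolute Trajectory Error (ATE). The sketch below (not taken from the paper; function names and the rigid-alignment step are illustrative assumptions) aligns an estimated trajectory to the ground truth with a least-squares rotation and translation, then reports the RMSE of the position residuals:

```python
# Hedged sketch of ATE-RMSE computation for VO evaluation.
# Trajectories are Nx3 arrays of positions; the alignment uses the
# standard Kabsch/Umeyama least-squares rigid fit (an assumption,
# not necessarily the authors' exact evaluation protocol).
import numpy as np

def align_rigid(est, gt):
    """Least-squares rigid alignment (rotation + translation) of est onto gt."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps the result a proper rotation (det = +1)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return est @ R.T + t

def ate_rmse(est, gt):
    """Root-mean-square position error after rigid alignment."""
    aligned = align_rigid(est, gt)
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

Because the alignment removes the arbitrary choice of starting pose, ATE isolates the drift of the VO solution itself, which is what makes the comparison across weather and illumination conditions fair.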
