The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-G-2025
https://doi.org/10.5194/isprs-archives-XLVIII-G-2025-959-2025
29 Jul 2025

Neural Implicit Monocular Visual SLAM for 3D Reconstruction in Planetary Environments

Chen Liu, Rong Huang, Huan Xie, Tao Tao, Yongjiu Feng, and Xiaohua Tong

Keywords: Neural Radiance Fields, Visual SLAM, Reconstruction, Planetary Environments

Abstract. The application of SLAM technology in planetary environments has become a research frontier for autonomous rovers. Existing visual SLAM methods often exhibit low accuracy in pose estimation and reconstruction due to poor feature extraction and mismatched correspondences. This paper introduces a novel strategy that integrates neural implicit networks within a visual SLAM framework. By jointly optimizing camera poses and implicit scene representations using neural radiance fields, we achieve high-precision visual localization in Martian scenes without requiring loop closure. We validate our method using data from NASA’s Perseverance rover and compare its performance with OV2SLAM. The results demonstrate that our method significantly outperforms OV2SLAM in localization accuracy, achieving an 85.16% reduction in absolute trajectory error and maintaining translation errors within 1 m across the entire trajectory. Moreover, our framework delivers compelling novel view synthesis despite sparse inputs and a fixed forward-facing viewpoint. The 3D point cloud models, synthesized from estimated depth maps and poses, further highlight the feasibility and effectiveness of our method for reconstruction in planetary environments.
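The core idea of the abstract — jointly optimizing camera poses and an implicit scene representation against a photometric loss — can be sketched in a deliberately simplified 1D toy, and this is not the authors' implementation: a Gaussian-basis function stands in for the neural radiance field, and a scalar shift stands in for the camera pose. All names and the synthetic data are illustrative assumptions.

```python
# Toy sketch of joint pose + implicit-scene optimization via a photometric
# loss, in the spirit of NeRF-based SLAM. The 1D Gaussian basis replaces the
# neural implicit network and a scalar shift t replaces the camera pose.
import numpy as np

# Implicit scene model: f(x; w) = sum_j w_j * exp(-(x - c_j)^2 / (2 sigma^2))
centers = np.linspace(0.0, 2 * np.pi, 8)
sigma = 0.6

def features(x):
    # Gaussian radial basis evaluated at sample positions x
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma**2))

def dfeatures_dshift(x):
    # Derivative of each basis function w.r.t. a shift of x (for the pose grad)
    return features(x) * (-(x[:, None] - centers[None, :]) / sigma**2)

# Synthetic "image": intensities observed under an unknown pose shift t_true
x = np.linspace(0.0, 2 * np.pi, 100)
w_true = np.array([1.0, -0.5, 0.8, 0.2, -1.0, 0.5, 0.3, -0.2])
t_true = 0.3
y = features(x + t_true) @ w_true

# Joint gradient descent on scene weights w and pose shift t
w = np.zeros_like(w_true)
t = 0.0
lr_w, lr_t = 0.2, 0.05
losses = []
for _ in range(2000):
    phi = features(x + t)
    r = phi @ w - y                       # photometric residual
    losses.append(float(np.mean(r**2)))
    grad_w = 2.0 * phi.T @ r / len(x)     # dL/dw
    grad_t = 2.0 * np.mean(r * (dfeatures_dshift(x + t) @ w))  # dL/dt
    w -= lr_w * grad_w
    t -= lr_t * grad_t

print(f"initial loss {losses[0]:.4f} -> final loss {losses[-1]:.6f}, t = {t:.3f}")
```

As in the paper's framework, pose and scene are coupled through a single rendering loss, so errors in one can be corrected by gradients from the other; the real system does this over SE(3) poses and an MLP-based radiance field rather than a scalar shift and a linear basis.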
