3D Reconstruction via Depth and Normal Priors Guided 3D Gaussian Splatting
Keywords: 3D Reconstruction, 3D Gaussian Splatting, Novel View Synthesis, Large-Scale Scene Reconstruction
Abstract. This paper introduces a novel 3D reconstruction method that leverages depth and normal priors within a 3D Gaussian splatting framework. The approach addresses the limitations of traditional 3D reconstruction methods, which often suffer from complex pipelines, high computational and storage demands, and loss of fine detail. Our method first constructs a low-precision global Gaussian radiance field, then applies adaptive scene and data partitioning to improve optimization efficiency while maintaining load balance. We integrate learning-based depth and normal estimation to establish geometric priors, which effectively reduce artifacts, accelerate convergence, and improve synthesis quality under sparse views. Furthermore, we propose a constraint mechanism based on the shape and opacity of Gaussians to suppress floating artifacts and enhance model robustness. Experimental results demonstrate that our method achieves higher reconstruction quality than existing approaches and generalizes well to large-scale 3D reconstruction.
