The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-1/W1-2023
https://doi.org/10.5194/isprs-archives-XLVIII-1-W1-2023-207-2023
25 May 2023

A COMPARATIVE NEURAL RADIANCE FIELD (NERF) 3D ANALYSIS OF CAMERA POSES FROM HOLOLENS TRAJECTORIES AND STRUCTURE FROM MOTION

M. Jäger, P. Hübner, D. Haitz, and B. Jutzi

Keywords: Neural Radiance Fields, Microsoft HoloLens, Structure from Motion, Trajectory, 3D Reconstruction, Point Cloud

Abstract. Neural Radiance Fields (NeRFs) are trained on a set of camera poses and associated images to estimate density and color values for each position. The learned position-dependent density is of particular interest for photogrammetry, as it enables 3D reconstruction by querying the NeRF coordinate system and filtering points based on object density. While traditional methods like Structure from Motion are commonly used to compute camera poses in pre-processing for NeRFs, the HoloLens offers an interesting interface for extracting the required input data directly. We present a workflow for high-resolution 3D reconstruction almost directly from HoloLens data using NeRFs. Two sources of camera poses are investigated: internal camera poses from the HoloLens trajectory obtained via a server application, and external camera poses from Structure from Motion, each additionally in an enhanced variant obtained through pose refinement. Results show that the internal camera poses, after a simple rotation around the x-axis, lead to NeRF convergence with a PSNR of 25 dB and enable a 3D reconstruction. Pose refinement yields quality comparable to that of the external camera poses, resulting in an improved training process with a PSNR of 27 dB and a better 3D reconstruction. Overall, the NeRF reconstructions outperform the conventional photogrammetric dense reconstruction using Multi-View Stereo in terms of completeness and level of detail.
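To illustrate the density-based reconstruction step described in the abstract, the following minimal sketch samples a trained NeRF on a regular grid and keeps only the points whose predicted density exceeds a threshold. The query_density callable, the grid resolution, and the threshold value are illustrative assumptions, not part of the authors' implementation.

# Minimal sketch of density-based point-cloud extraction from a trained NeRF.
# query_density is a hypothetical interface standing in for whatever the
# chosen NeRF framework exposes; resolution and threshold are illustrative.
import numpy as np

def extract_point_cloud(query_density, bounds, resolution=256, density_threshold=10.0):
    """Sample the NeRF volume on a regular grid and keep points whose
    predicted density exceeds the threshold."""
    (x_min, y_min, z_min), (x_max, y_max, z_max) = bounds
    xs = np.linspace(x_min, x_max, resolution)
    ys = np.linspace(y_min, y_max, resolution)
    zs = np.linspace(z_min, z_max, resolution)
    # Flatten the grid to an (N, 3) array of query positions.
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1).reshape(-1, 3)

    # Query densities in chunks to keep memory bounded.
    densities = np.concatenate(
        [query_density(chunk) for chunk in np.array_split(grid, 64)]
    )
    return grid[densities > density_threshold]

The threshold acts as the density filter mentioned in the abstract: lowering it yields a denser but noisier point cloud, while raising it keeps only confidently occupied regions.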