The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLVIII-2/W3-2023
https://doi.org/10.5194/isprs-archives-XLVIII-2-W3-2023-115-2023
12 May 2023

DOUBLE NERF: REPRESENTING DYNAMIC SCENES AS NEURAL RADIANCE FIELDS

V. V. Kniaz, V. A. Knyaz, A. Bordodymov, P. Moshkantsev, D. Novikov, and S. Barylnik

Keywords: neural radiance fields, novel view synthesis, 3D scene reconstruction

Abstract. Neural Radiance Fields (NeRFs) are non-convolutional neural models that learn the 3D structure and color of a scene to produce novel images of that scene from a new viewpoint. NeRFs are closely related to such photogrammetric problems as camera pose estimation and bundle adjustment. A NeRF takes a set of oriented cameras and photos as input and learns a function that maps a 5D pose vector to an RGB color and a volume density at that point. The estimated function can then be used to draw an image with a volume rendering pipeline. Still, NeRFs have a major limitation: they cannot be used for dynamic scene synthesis. We propose a modified NeRF framework that represents a dynamic scene as a superposition of two or more neural radiance fields. We consider a simple dynamic scene consisting of a static background and a moving object with a static (rigid) shape. Our framework therefore includes two neural radiance fields: one for the background scene and one for the dynamic object. We implemented our DoubleNeRF model using the TensorFlow library. The evaluation results are encouraging and demonstrate that our DoubleNeRF model matches and surpasses the state of the art in dynamic scene synthesis, and that it can be effectively used to synthesize photorealistic dynamic image sequences and videos.
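To make the abstract's core idea concrete, below is a minimal TensorFlow sketch of a scene rendered as a superposition of two radiance fields: each field is an MLP mapping a 5D input (3D position plus 2D view direction) to an RGB color and a volume density, and the two fields are combined per ray sample before standard alpha compositing. This is an illustrative reconstruction from the abstract only, not the authors' implementation; the network sizes, the rigid-transform handling of the moving object, and all function names are assumptions.

```python
# Minimal sketch (assumed, not the authors' released code) of a dynamic scene
# rendered as a superposition of two NeRF-style radiance fields.

import tensorflow as tf


def make_nerf_mlp(depth=8, width=256):
    """Small MLP: 5D input (x, y, z, theta, phi) -> (rgb, sigma)."""
    inputs = tf.keras.Input(shape=(5,))
    x = inputs
    for _ in range(depth):
        x = tf.keras.layers.Dense(width, activation="relu")(x)
    rgb = tf.keras.layers.Dense(3, activation="sigmoid")(x)  # color in [0, 1]
    sigma = tf.keras.layers.Dense(1, activation="relu")(x)   # non-negative density
    return tf.keras.Model(inputs, [rgb, sigma])


background_field = make_nerf_mlp()  # static background scene
object_field = make_nerf_mlp()      # rigidly moving object, in its canonical frame


def composite_ray(samples_5d, deltas, world_to_object):
    """Volume-render one ray through the superposition of both fields.

    samples_5d:      (N, 5) sample positions + view directions (world frame)
    deltas:          (N,) distances between consecutive samples along the ray
    world_to_object: (4, 4) rigid transform into the object's canonical frame
                     (an assumed way to query a static field for a rigidly
                     moving object; the paper may handle motion differently)
    """
    # Query the static background field directly in world coordinates.
    rgb_bg, sigma_bg = background_field(samples_5d)

    # Map sample positions into the object's canonical frame; keep view dirs.
    ones = tf.ones([tf.shape(samples_5d)[0], 1])
    xyz_h = tf.concat([samples_5d[:, :3], ones], axis=-1)
    xyz_obj = tf.linalg.matvec(world_to_object, xyz_h)[:, :3]
    rgb_obj, sigma_obj = object_field(
        tf.concat([xyz_obj, samples_5d[:, 3:]], axis=-1))

    # Superpose the fields: densities add, colors are density-weighted.
    sigma = sigma_bg + sigma_obj
    rgb = (sigma_bg * rgb_bg + sigma_obj * rgb_obj) / (sigma + 1e-8)

    # Standard NeRF volume rendering: alpha compositing along the ray.
    alpha = 1.0 - tf.exp(-sigma[:, 0] * deltas)
    transmittance = tf.math.cumprod(1.0 - alpha + 1e-10, exclusive=True)
    weights = alpha * transmittance
    return tf.reduce_sum(weights[:, None] * rgb, axis=0)  # final pixel color
```

Under this reading, animating the scene only requires updating the object's rigid transform per frame; both MLPs stay fixed, which is what makes the superposition a natural fit for a static background plus a moving object of fixed shape.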