The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLII-2/W9
31 Jan 2019


V. V. Kniaz, F. Remondino, and V. A. Knyaz

Keywords: generative adversarial networks, deep convolutional neural networks, cultural heritage, single image

Abstract. Fast but precise 3D reconstruction of cultural heritage scenes is increasingly in demand in archaeology and architecture. While modern multi-image 3D reconstruction approaches provide impressive results in terms of textured surface models, there is often a need to create a 3D model of an object for which only a single photo (or a few sparse ones) is available. This paper focuses on the single-photo 3D reconstruction problem for lost cultural objects of which only a few images remain. We use an image-to-voxel translation network (Z-GAN) as a starting point. The Z-GAN network uses skip connections in its generator to transfer 2D features to a 3D voxel model effectively (Figure 1). The network can therefore generate voxel models of previously unseen objects, using the object silhouettes present in the input image and the knowledge obtained during training. To train our Z-GAN network, we created a large dataset of aligned pairs of images and corresponding voxel models of an ancient Greek temple. We evaluated the Z-GAN network for single-photo reconstruction on complex structures such as temples, as well as on lost heritage still visible in crowdsourced images. Comparisons of the reconstruction results with state-of-the-art methods are also presented and commented on.
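The core architectural idea mentioned above, transferring 2D encoder features into a 3D decoder via skip connections, can be sketched numerically. The following is a minimal NumPy illustration (hypothetical shapes and function name, not the authors' implementation): a 2D encoder feature map is replicated along a new depth axis so it can be concatenated with the 3D decoder's feature volume at the same spatial resolution.

```python
import numpy as np

def skip_2d_to_3d(feat2d, decoder3d):
    """Illustrative 2D->3D skip connection (hypothetical sketch).

    feat2d:    2D encoder features, shape (C, H, W)
    decoder3d: 3D decoder features, shape (C2, D, H, W)
    Returns the channel-wise concatenation, shape (C + C2, D, H, W).
    """
    c, h, w = feat2d.shape
    c2, d, h2, w2 = decoder3d.shape
    assert (h, w) == (h2, w2), "spatial sizes of the skip must match"
    # Repeat the 2D map along a new depth axis: (C, H, W) -> (C, D, H, W)
    tiled = np.repeat(feat2d[:, None, :, :], d, axis=1)
    # Concatenate with the decoder volume along the channel axis
    return np.concatenate([tiled, decoder3d], axis=0)

# Toy shapes: an 8-channel 16x16 encoder map joined to a
# 4-channel 16x16x16 decoder volume.
enc = np.random.rand(8, 16, 16)
dec = np.random.rand(4, 16, 16, 16)
out = skip_2d_to_3d(enc, dec)
print(out.shape)  # -> (12, 16, 16, 16)
```

In a real generator these arrays would be learned convolutional feature maps, and the concatenated volume would feed the next 3D (de)convolution; the sketch only shows how the 2D silhouette information can reach every depth slice of the voxel output.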