The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLII-2
https://doi.org/10.5194/isprs-archives-XLII-2-513-2018
30 May 2018

DEEP LEARNING FOR LOW-TEXTURED IMAGE MATCHING

V. V. Kniaz, V. V. Fedorenko, and N. A. Fomin

Keywords: image matching, deep convolutional neural networks, auto-encoders, cultural heritage

Abstract. Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging; nevertheless, such documentation is possible with the aid of a human operator. Recently, deep learning-based descriptors have outperformed most common feature point descriptors. This paper focuses on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching across multiple images. The matching is performed using nearest neighbor search and a modified voting algorithm. We present a new “Multi-view Amphora” (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the “Amphora” dataset and show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
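To make the general idea concrete, the sketch below shows a convolutional auto-encoder that compresses local image patches into short descriptor codes and matches two images by nearest neighbor search over those codes. This is only an illustrative assumption of how such a pipeline can be wired up, not the authors' WIZARD architecture: the patch size, code length, layer shapes, and the `PatchAutoEncoder` / `match_codes` names are hypothetical, and the codebook and modified voting stages described in the abstract are not reproduced here.

```python
# Minimal sketch (assumed architecture, not the WIZARD network from the paper):
# a convolutional auto-encoder that compresses 32x32 grayscale patches into
# descriptor codes, plus brute-force nearest neighbor matching of the codes.
import torch
import torch.nn as nn


class PatchAutoEncoder(nn.Module):
    def __init__(self, code_dim: int = 128):
        super().__init__()
        # Encoder: 1x32x32 patch -> code_dim descriptor code
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),   # 32x16x16
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 64x8x8
            nn.ReLU(inplace=True),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, code_dim),
        )
        # Decoder: reconstructs the patch so the code retains discriminative detail
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64 * 8 * 8),
            nn.ReLU(inplace=True),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 32x16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),   # 1x32x32
            nn.Sigmoid(),
        )

    def forward(self, patches: torch.Tensor):
        codes = self.encoder(patches)
        reconstruction = self.decoder(codes)
        return codes, reconstruction


def match_codes(codes_a: torch.Tensor, codes_b: torch.Tensor) -> torch.Tensor:
    """For each descriptor code from image A, return the index of its
    nearest neighbor (Euclidean distance) among the codes from image B."""
    distances = torch.cdist(codes_a, codes_b)  # pairwise L2 distances
    return distances.argmin(dim=1)


if __name__ == "__main__":
    model = PatchAutoEncoder(code_dim=128)
    patches_a = torch.rand(100, 1, 32, 32)  # local patches from image A
    patches_b = torch.rand(120, 1, 32, 32)  # local patches from image B
    with torch.no_grad():
        codes_a, _ = model(patches_a)
        codes_b, _ = model(patches_b)
    matches = match_codes(codes_a, codes_b)
    print(matches.shape)  # torch.Size([100]): one candidate match per A-patch
```

In practice the auto-encoder would be trained with a reconstruction loss on patches from the target imagery, and the raw nearest neighbor matches would then be filtered, for example by a codebook lookup and voting scheme as the abstract describes.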