DEEP-IMAGE-MATCHING: A TOOLBOX FOR MULTIVIEW IMAGE MATCHING OF COMPLEX SCENARIOS
Keywords: Deep-learning, Multiview image matching, 3D reconstruction, Image retrieval, SuperGlue, LightGlue, LoFTR, SIFT
Abstract. Finding corresponding points between images is a fundamental step in many photogrammetry and computer vision tasks. Traditionally, image matching has relied on hand-crafted algorithms such as SIFT or ORB. However, these algorithms struggle with multi-temporal images, varying radiometry and scene content, and significant viewpoint differences. Recently, the computer vision community has proposed several deep learning-based approaches trained for challenging illumination and wide viewing-angle scenarios. Yet these approaches have their own limitations, such as sensitivity to image rotations, and they cannot be applied directly to high-resolution images due to computational constraints. Moreover, they are not widely adopted by the photogrammetric community because of their limited integration with standard photogrammetric software packages. To overcome these challenges, this paper introduces Deep-Image-Matching, an open-source toolbox for matching images with different strategies, ranging from traditional hand-crafted to deep learning-based methods (https://github.com/3DOM-FBK/deep-image-matching). The toolbox accommodates high-resolution datasets, e.g. data acquired with full-frame or aerial sensors, and addresses the known rotation-related problems of learned features. It exports image correspondences in formats directly compatible with commercial and open-source software packages, such as COLMAP and openMVG, for bundle adjustment. The paper also includes a series of cultural heritage case studies presenting challenging conditions under which traditional hand-crafted approaches typically fail.
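To make the hand-crafted baseline mentioned above concrete, the following minimal sketch shows a classical SIFT matching pipeline implemented with OpenCV, i.e. nearest-neighbour descriptor matching filtered by Lowe's ratio test. It is an illustration of the traditional approach that the toolbox generalises, not code from Deep-Image-Matching itself; the image file names and the ratio threshold are placeholder assumptions.

```python
# Illustrative hand-crafted matching baseline (SIFT + ratio test).
# Not part of the Deep-Image-Matching toolbox; file names are placeholders.
import cv2

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 128-D SIFT descriptors in both images.
sift = cv2.SIFT_create()
kpts1, desc1 = sift.detectAndCompute(img1, None)
kpts2, desc2 = sift.detectAndCompute(img2, None)

# Brute-force nearest-neighbour matching with Lowe's ratio test
# to discard ambiguous correspondences (0.8 is a common choice).
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(desc1, desc2, k=2)
good = [m for m, n in knn if m.distance < 0.8 * n.distance]

# Pixel coordinates of the surviving correspondences: this is the kind
# of output that is then handed to a bundle-adjustment back-end such as
# COLMAP or openMVG.
pts1 = [kpts1[m.queryIdx].pt for m in good]
pts2 = [kpts2[m.trainIdx].pt for m in good]
print(f"{len(good)} putative correspondences")
```

Learned extractors and matchers such as SuperGlue, LightGlue, or LoFTR replace the detection, description, and matching stages of this pipeline while the downstream geometric verification and bundle adjustment remain unchanged.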