The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume XLVIII-3-2024
https://doi.org/10.5194/isprs-archives-XLVIII-3-2024-177-2024
07 Nov 2024

Filtering Keypoints with ORB Features

Thaisa Aline Correia Garcia, Antonio Maria Garcia Tommaselli, Letícia Ferrari Castanheiro, and Mariana Batista Campos

Keywords: ORB features, keypoints, machine learning, fisheye lenses

Abstract. Keypoint detectors and descriptors are essential for identifying points and their correspondences in overlapping images, providing fundamental inputs for many subsequent processes, including Pose Estimation, Visual Odometry, vSLAM, Object Detection, Object Tracking, Augmented Reality, Image Mosaicking, and Panorama Stitching. Techniques such as SIFT, SURF, KAZE, and ORB aim to identify repeatable, distinctive, efficient, and local features. Despite their robustness, some keypoints, especially those detected in fisheye images, do not contribute to the solution and may introduce outliers or errors. Fisheye cameras capture a broader view, leading to more keypoints at infinity and potential errors. Filtering these keypoints is important to maintain consistent input observations. Various methods, including gradient-based sky region detection, adaptive algorithms, and K-means clustering, have addressed this issue. Semantic segmentation could be an effective alternative, but it requires extensive computational resources. Machine learning provides a more flexible alternative, processing large data volumes with moderate computational power and enhancing solution robustness by filtering out non-contributing keypoints already detected by these vision-based approaches. In this paper, we present and assess a machine learning model that classifies keypoints as sky or non-sky, achieving an accuracy of 82.1%.
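The filtering stage described above (detect keypoints, classify each as sky or non-sky, and keep only the non-sky ones) can be sketched as follows. This is a minimal, hypothetical illustration: the `classify_sky` stand-in uses a single brightness feature with an assumed threshold, whereas the paper trains a machine learning model on actual keypoint features; the function names and the feature choice are illustrative, not taken from the paper.

```python
# Hypothetical sketch of the sky/non-sky keypoint filtering pipeline.
# The classifier below is a stand-in brightness threshold; the paper's
# method uses a trained machine learning model instead.

def classify_sky(brightness):
    """Stand-in classifier: very bright, low-texture patches -> sky.
    The 0.8 threshold is an assumption for illustration only."""
    return brightness > 0.8

def filter_keypoints(keypoints, brightness_features):
    """Keep only keypoints whose surrounding patch is classified
    as non-sky, so downstream pose estimation or vSLAM receives
    consistent input observations."""
    return [kp for kp, b in zip(keypoints, brightness_features)
            if not classify_sky(b)]

# Example: three detected keypoints as (row, col) image coordinates,
# each with a mean patch brightness in [0, 1].
keypoints = [(10, 12), (200, 45), (310, 400)]
brightness = [0.95, 0.30, 0.55]
print(filter_keypoints(keypoints, brightness))  # first point dropped as sky
```

In a real pipeline the keypoints would come from a detector such as ORB, and the per-keypoint features would be descriptor-based rather than a single brightness value; only the filtering structure is shown here.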