🤖 AI Summary
This work addresses the longstanding absence of a unified benchmark for evaluating cross-modal matching between infrared and visible-light images, which exhibit significant modality gaps. To this end, we introduce CM-Bench, the first standardized evaluation framework for cross-modal feature matching, systematically assessing 30 sparse, semi-dense, and dense matching algorithms. We further propose an adaptive preprocessing frontend driven by a classification network to enhance matching robustness. Additionally, we release the first infrared-to-satellite image geolocation dataset with human-annotated ground truth, enabling multi-task evaluation including pose estimation, homography computation, and geolocation. Extensive experiments validate the effectiveness of our approach, and both the dataset and evaluation toolkit are publicly released to establish a solid benchmark for the community.
📝 Abstract
Infrared-visible (IR-VIS) feature matching plays an essential role in cross-modality visual localization, navigation, and perception. Along with the rapid development of deep learning techniques, a number of representative image matching methods have been proposed. However, cross-modal feature matching remains a challenging task due to the significant appearance differences between modalities. A major gap in cross-modal feature matching research lies in the absence of standardized benchmarks and metrics for evaluation. In this paper, we introduce a comprehensive cross-modal feature matching benchmark, CM-Bench, which encompasses 30 feature matching algorithms across diverse cross-modal datasets. Specifically, state-of-the-art traditional and deep learning-based methods are first summarized and categorized into sparse, semi-dense, and dense methods. These methods are evaluated on different tasks including homography estimation, relative pose estimation, and feature-matching-based geo-localization. In addition, we introduce a classification-network-based adaptive preprocessing front-end that automatically selects suitable enhancement strategies before matching. We also present a novel infrared-satellite cross-modal dataset with manually annotated ground-truth correspondences for practical geo-localization evaluation. The dataset and resources will be made available at: https://github.com/SLZ98/CM-Bench.
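To make the homography-estimation evaluation task concrete, the sketch below shows how a homography can be recovered from matched keypoint pairs via the standard Direct Linear Transform (DLT). This is a generic illustration, not code from CM-Bench; the function name `estimate_homography` and the toy points are assumptions for demonstration only, and benchmark pipelines would typically add RANSAC on top to handle cross-modal outlier matches.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst via the DLT.

    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 4.
    (Illustrative helper, not part of the CM-Bench toolkit.)
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix scale so H[2, 2] == 1

# Toy check: points related by a known homography are recovered exactly.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.0, 0.9, -3.0],
                   [0.001, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [50, 30]], float)
homog = np.c_[src, np.ones(len(src))] @ H_true.T
dst = homog[:, :2] / homog[:, 2:]
H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # → True
```

In a benchmark setting, the estimated homography would be compared against ground truth by, e.g., the reprojection error of image corners, which is one common way such tasks are scored.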