CM-Bench: A Comprehensive Cross-Modal Feature Matching Benchmark Bridging Visible and Infrared Images

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the longstanding absence of a unified benchmark for evaluating cross-modal matching between infrared and visible-light images, which exhibit significant modality gaps. To this end, we introduce CM-Bench, the first standardized evaluation framework for cross-modal feature matching, systematically assessing 30 sparse, semi-dense, and dense matching algorithms. We further propose an adaptive preprocessing frontend driven by a classification network to enhance matching robustness. Additionally, we release the first infrared-to-satellite image geolocation dataset with human-annotated ground truth, enabling multi-task evaluation including pose estimation, homography computation, and geolocation. Extensive experiments validate the effectiveness of our approach, and both the dataset and evaluation toolkit are publicly released to establish a solid benchmark for the community.

📝 Abstract
Infrared-visible (IR-VIS) feature matching plays an essential role in cross-modal visual localization, navigation, and perception. Along with the rapid development of deep learning, a number of representative image matching methods have been proposed. However, cross-modal feature matching remains a challenging task due to the significant appearance difference between modalities, and research in this area has long been hindered by the absence of standardized benchmarks and evaluation metrics. In this paper, we introduce a comprehensive cross-modal feature matching benchmark, CM-Bench, which encompasses 30 feature matching algorithms across diverse cross-modal datasets. Specifically, state-of-the-art traditional and deep learning-based methods are first summarized and categorized into sparse, semi-dense, and dense methods. These methods are evaluated on different tasks, including homography estimation, relative pose estimation, and feature-matching-based geo-localization. In addition, we introduce a classification-network-based adaptive preprocessing front-end that automatically selects suitable enhancement strategies before matching. We also present a novel infrared-satellite cross-modal dataset with manually annotated ground-truth correspondences for practical geo-localization evaluation. The dataset and resources will be made available at: https://github.com/SLZ98/CM-Bench.
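To make the homography-estimation task concrete, the following is a minimal, self-contained sketch of how a matcher could be scored on that task: given putative IR-VIS correspondences, fit a homography with the standard Direct Linear Transform (DLT) and report mean reprojection error. This is an illustrative implementation only, not CM-Bench's actual evaluation code; real benchmarks typically use robust estimators (e.g. RANSAC) over detected matches rather than the noise-free synthetic points shown here.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform (DLT): fit H such that dst ~ H @ src.

    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reprojection_error(H, src, dst):
    """Mean Euclidean distance between H-projected src points and dst."""
    pts = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    proj = pts[:, :2] / pts[:, 2:3]
    return float(np.linalg.norm(proj - dst, axis=1).mean())

# Synthetic check: warp four points with a known homography, then recover it.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.0, 0.9, -3.0],
                   [1e-3, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
pts = np.hstack([src, np.ones((4, 1))]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:3]

H_est = estimate_homography(src, dst)
err = reprojection_error(H_est, src, dst)  # near zero for noise-free matches
```

In benchmark use, the reprojection error (or the fraction of image corners projected within a pixel threshold) is what ranks matchers: a method that produces accurate IR-VIS correspondences yields a homography that maps points with low error.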
Problem

Research questions and friction points this paper is trying to address.

cross-modal feature matching
infrared-visible images
benchmark
evaluation metrics
geo-localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-modal feature matching
infrared-visible image matching
benchmark
adaptive preprocessing
geo-localization
Liangzheng Sun
School of Instrument Science and Opto-Electronics Engineering, Beijing Information Science and Technology University, Beijing 100192, China
Mengfan He
Department of Precision Instrument, Tsinghua University, Beijing 100084, China
Xingyu Shao
Department of Precision Instrument, Tsinghua University, Beijing 100084, China
Binbin Li
School of Instrument Science and Opto-Electronics Engineering, Beijing Information Science and Technology University, Beijing 100192, China
Zhiqiang Yan
National University of Singapore
3D computer vision, depth perception, occupancy prediction
Chunyu Li
Department of Precision Instrument, Tsinghua University, Beijing 100084, China
Ziyang Meng
Department of Precision Instrument, Tsinghua University, Beijing 100084, China
Fei Xing
Assistant Professor, Wake Forest Baptist Health
oncology