MapGlue: Multimodal Remote Sensing Image Matching

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal remote sensing image (MRSI) matching remains highly challenging due to geometric, radiometric, and viewpoint disparities, while the scarcity of large-scale, diverse, real-world paired datasets hinders progress in deep learning. To address this, we introduce MapData, the first globally representative, large-scale paired dataset of electronic maps and visible-light imagery, and propose MapGlue, a semantic-driven, dual-graph-guided matching framework. MapGlue employs dual graph neural networks to enable global-to-local cross-modal interaction and to explicitly extract semantically consistent features. It further integrates hybrid human-machine ground-truth annotation with multimodal alignment strategies. Extensive experiments demonstrate that MapGlue significantly outperforms state-of-the-art methods on MapData and five public benchmarks. Notably, it achieves zero-shot transfer to unseen modalities, including infrared and SAR, validating its strong generalization capability and cross-modal invariant representation learning.

📝 Abstract
Multimodal remote sensing image (MRSI) matching is pivotal for cross-modal fusion, localization, and object detection, but it faces severe challenges due to geometric, radiometric, and viewpoint discrepancies across imaging modalities. Existing unimodal datasets lack scale and diversity, limiting deep learning solutions. This paper proposes MapGlue, a universal MRSI matching framework, and MapData, a large-scale multimodal dataset addressing these gaps. Our contributions are twofold. MapData, a globally diverse dataset spanning 233 sampling points, offers original images (7,000×5,000 to 20,000×15,000 pixels). After rigorous cleaning, it provides 121,781 aligned electronic map-visible image pairs (512×512 pixels) with hybrid manual-automated ground truth, addressing the scarcity of scalable multimodal benchmarks. MapGlue integrates semantic context with a dual graph-guided mechanism to extract cross-modal invariant features. This structure enables global-to-local interaction, enhancing descriptor robustness against modality-specific distortions. Extensive evaluations on MapData and five public datasets demonstrate MapGlue's superiority in matching accuracy under complex conditions, outperforming state-of-the-art methods. Notably, MapGlue generalizes effectively to unseen modalities without retraining, highlighting its adaptability. This work addresses longstanding challenges in MRSI matching by combining scalable dataset construction with a robust, semantics-driven framework. Furthermore, MapGlue shows strong generalization capabilities on other modality matching tasks for which it was not specifically trained. The dataset and code are available at https://github.com/PeihaoWu/MapGlue.
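The dual graph-guided idea described above (intra-image graphs for global context, cross-image graphs for modality interaction, followed by descriptor matching) can be sketched in miniature. This is a hypothetical illustration of the general graph-attention matching pattern, not the authors' implementation: the function names, update rule, and the mutual-nearest-neighbour step are all assumptions for the sake of a runnable example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: each query aggregates messages
    # from all keys, weighted by descriptor similarity.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def dual_graph_match(desc_a, desc_b, iters=3):
    """Toy dual-graph matcher: alternate self-graph (within one image)
    and cross-graph (between images) message passing, then pair
    descriptors by mutual nearest neighbours."""
    a, b = desc_a.copy(), desc_b.copy()
    for _ in range(iters):
        a = a + attention(a, a, a)        # self-graph update, image A
        b = b + attention(b, b, b)        # self-graph update, image B
        a_new = a + attention(a, b, b)    # cross-graph update, A <- B
        b = b + attention(b, a, a)        # cross-graph update, B <- A
        a = a_new
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        b /= np.linalg.norm(b, axis=1, keepdims=True)
    sim = a @ b.T
    # Keep only mutually consistent matches.
    ia, ib = sim.argmax(axis=1), sim.argmax(axis=0)
    return [(i, j) for i, j in enumerate(ia) if ib[j] == i]

# Simulate two "modalities" as noisy views of the same keypoint descriptors.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 32))
matches = dual_graph_match(base + 0.05 * rng.normal(size=base.shape),
                           base + 0.05 * rng.normal(size=base.shape))
print(matches)
```

In the real framework the descriptors come from a learned semantic backbone and the graph updates are trained layers; the sketch only shows why alternating self- and cross-graph passes lets each descriptor absorb both global scene context and the other modality's structure before matching.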
Problem

Research questions and friction points this paper is trying to address.

Addresses challenges in multimodal remote sensing image matching.
Proposes a universal framework and scalable dataset for MRSI matching.
Enhances descriptor robustness against modality-specific distortions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

MapGlue integrates semantic context with a dual graph-guided mechanism.
MapData provides 121,781 aligned multimodal image pairs globally.
MapGlue generalizes effectively to unseen modalities without retraining.
Peihao Wu
School of Remote Sensing Information Engineering, Wuhan University, Wuhan 430079, China
Yongxiang Yao
School of Remote Sensing Information Engineering, Wuhan University, Wuhan 430079, China
Wenfei Zhang
School of Remote Sensing Information Engineering, Wuhan University, Wuhan 430079, China
Dong Wei
School of Remote Sensing Information Engineering, Wuhan University, Wuhan 430079, China
Yi Wan
Pokee AI
reinforcement learning
Yansheng Li
Professor, Wuhan University
Deep Learning, Knowledge Graph, Remote Sensing Big Data Mining
Yongjun Zhang
School of Remote Sensing Information Engineering, Wuhan University, Wuhan 430079, China