🤖 AI Summary
Existing single-modal remote sensing change detection (RSCD) methods suffer from limited feature representation, coarse-grained change pattern modeling, and poor robustness to illumination variations and noise. To address these bottlenecks, this paper proposes MMChange, a novel multimodal change detection framework that pioneers the integration of the textual modality into RSCD. Specifically, it leverages vision-language models to extract semantic image descriptions, introduces a textual difference enhancement module for fine-grained semantic change characterization, and establishes an image-text cross-modal fusion mechanism to enable complementary representation learning. The framework comprises four core components: image feature refinement, text semantic modeling, difference enhancement, and multimodal fusion. Extensive experiments on three benchmark datasets—LEVIR-CD, WHU-CD, and SYSU-CD—demonstrate that MMChange consistently outperforms state-of-the-art methods, achieving significant improvements in mF1 and IoU scores. These results validate the gains in detection accuracy and environmental robustness enabled by multimodal synergy.
📝 Abstract
Although deep learning has advanced remote sensing change detection (RSCD), most methods rely solely on the image modality, limiting feature representation, change pattern modeling, and generalization, especially under illumination and noise disturbances. To address this, we propose MMChange, a multimodal RSCD method that combines image and text modalities to enhance accuracy and robustness. An Image Feature Refinement (IFR) module is introduced to highlight key regions and suppress environmental noise. To overcome the semantic limitations of image features, we employ a vision-language model (VLM) to generate semantic descriptions of bitemporal images. A Textual Difference Enhancement (TDE) module then captures fine-grained semantic shifts, guiding the model toward meaningful changes. To bridge the heterogeneity between modalities, we design an Image Text Feature Fusion (ITFF) module that enables deep cross-modal integration. Extensive experiments on LEVIR-CD, WHU-CD, and SYSU-CD demonstrate that MMChange consistently surpasses state-of-the-art methods across multiple metrics, validating its effectiveness for multimodal RSCD. Code is available at: https://github.com/yikuizhai/MMChange.
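To make the TDE and ITFF ideas concrete, here is a minimal numpy sketch of the two operations as the abstract describes them at a high level: a textual difference vector derived from the embeddings of the two bitemporal captions, and a cross-modal fusion step in which that vector attends over spatial image features. All function names, shapes, and the specific attention formulation are illustrative assumptions, not the paper's actual implementation (which is available at the repository linked above).

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length for stable comparison."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def textual_difference(t1, t2):
    """TDE idea (simplified, hypothetical): represent the semantic shift
    between the two caption embeddings as their normalized difference."""
    return l2_normalize(t2 - t1)

def image_text_fusion(img_feat, txt_feat):
    """ITFF idea (simplified, hypothetical): single-query cross-attention
    where the textual difference vector attends over flattened spatial
    image tokens, pooling them into one change representation.

    img_feat: (N, d) spatial tokens; txt_feat: (d,) query vector."""
    scores = img_feat @ txt_feat / np.sqrt(img_feat.shape[1])
    weights = np.exp(scores - scores.max())   # softmax over N tokens
    weights /= weights.sum()
    return weights @ img_feat                 # (d,) fused feature

# Toy usage with random stand-ins for caption and image embeddings.
rng = np.random.default_rng(0)
d = 16
t1, t2 = rng.normal(size=d), rng.normal(size=d)   # VLM caption embeddings
img = rng.normal(size=(8, d))                     # 8 spatial image tokens
delta = textual_difference(t1, t2)
fused = image_text_fusion(img, delta)             # (16,) change feature
```

The point of the sketch is only the data flow: text enters as a *difference* signal rather than raw captions, and fusion is directional (text queries image), which is one plausible way to realize the "deep cross-modal integration" the abstract claims.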