🤖 AI Summary
To address cross-modal optimization imbalance in remote sensing image–text retrieval (RSITR), where the more discriminative text modality dominates optimization and suppresses visual representation learning during vision–language pretraining (VLP) fine-tuning, this paper proposes a cross-modal asymmetric adapter architecture and a dual-task consistency loss, enabling, for the first time in remote sensing VLP fine-tuning, modality-specific, parameter-efficient optimization with robust cross-modal alignment. The method integrates differential attention, hierarchical attention, parameter-efficient fine-tuning (PEFT), joint dual-task optimization, and exponential moving average (EMA)-based consistency regularization. Evaluated on the RSICD and RSITMD benchmarks, it improves mean Recall (mR) by 6%–11% over state-of-the-art PEFT methods and surpasses the full-parameter fine-tuning baseline GeoRSCLIP by 1.15%–2.0%, demonstrating superior efficiency and effectiveness in modality-balanced representation learning.
📝 Abstract
Remote Sensing Image-Text Retrieval (RSITR) plays a critical role in geographic information interpretation, disaster monitoring, and urban planning by establishing semantic associations between images and textual descriptions. Existing Parameter-Efficient Fine-Tuning (PEFT) methods for Vision-and-Language Pre-training (VLP) models typically adopt symmetric adapter structures to explore cross-modal correlations. However, the strongly discriminative text modality may dominate the optimization process and inhibit image representation learning. This non-negligible cross-modal optimization imbalance remains a bottleneck for improving model performance. To address this issue, this study proposes a Representation Discrepancy Bridging (RDB) method for the RSITR task. On the one hand, a Cross-Modal Asymmetric Adapter (CMAA) is designed to enable modality-specific optimization and improve feature alignment. The CMAA comprises a Visual Enhancement Adapter (VEA) and a Text Semantic Adapter (TSA). The VEA mines fine-grained image features through a Differential Attention (DA) mechanism, while the TSA identifies key textual semantics through a Hierarchical Attention (HA) mechanism. On the other hand, this study extends the traditional single-task retrieval framework to a dual-task optimization framework and develops a Dual-Task Consistency Loss (DTCL). The DTCL improves cross-modal alignment robustness through an adaptively weighted combination of cross-modal, classification, and exponential moving average (EMA) consistency constraints. Experiments on the RSICD and RSITMD datasets show that the proposed RDB method improves the mR metric by 6%–11% over state-of-the-art PEFT methods and by 1.15%–2% over the fully fine-tuned GeoRSCLIP model.
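To make the two key ingredients concrete, the sketch below illustrates (a) a differential-attention map in the common "difference of two softmax maps" form, as the abstract says the VEA uses DA to mine fine-grained features, and (b) a DTCL-style objective combining retrieval, classification, and EMA-teacher consistency terms. This is a minimal NumPy sketch: the function names, the weight coefficients `alpha`/`beta`/`gamma`, the subtraction coefficient `lam`, and the momentum `m` are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Differential attention (illustrative form): subtract a second
    attention map from the first to suppress common-mode noise,
    sharpening attention on fine-grained details."""
    d = Wq1.shape[1]
    a1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))
    a2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))
    return (a1 - lam * a2) @ (x @ Wv)

def dual_task_consistency_loss(l_retrieval, l_cls,
                               student_feats, teacher_feats,
                               alpha=1.0, beta=0.5, gamma=0.1):
    """Weighted sum of a cross-modal retrieval loss, a classification
    loss, and an MSE consistency term against an EMA teacher
    (weights here are placeholders, not the paper's adaptive scheme)."""
    l_cons = np.mean((student_feats - teacher_feats) ** 2)
    return alpha * l_retrieval + beta * l_cls + gamma * l_cons

def ema_update(teacher, student, m=0.99):
    """Exponential-moving-average teacher update."""
    return m * teacher + (1 - m) * student
```

In this reading, the asymmetry of the CMAA comes from routing image tokens through `differential_attention` while text tokens pass through a separate hierarchical-attention adapter, and the DTCL regularizes both branches toward a slowly evolving EMA teacher.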