🤖 AI Summary
To address key challenges in remote sensing vision-language retrieval, namely difficult cross-modal alignment, high model resource consumption, and insufficient exploitation of negative samples, this paper proposes CMER, a computation- and memory-efficient retrieval framework. Methodologically: (1) a Focus-Adapter side-branch module explicitly suppresses background interference in remote sensing images; (2) scene-label augmentation injects semantic priors from land cover categories to shrink the visual-linguistic matching space; and (3) a negative sample recycling strategy decouples the negative sample pool size from the mini-batch size. CMER combines adapter-based fine-tuning, an auxiliary side branch, semantic-aware data augmentation, and dynamic negative-sample management. On the RSITMD benchmark, CMER achieves 2–5% higher overall retrieval accuracy than recent advanced methods, reduces training memory consumption by 49%, and delivers 1.4× the data throughput during training.
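The recycling strategy in point (3) can be pictured as a fixed-size queue: each batch's embeddings are enqueued and reused as negatives for later batches, so the negative pool grows beyond the mini-batch without extra encoders. The sketch below is a minimal illustration under assumed names and shapes (`NegativeSampleQueue`, `pool_size`, `dim` are hypothetical), not the paper's actual implementation.

```python
import numpy as np
from collections import deque

class NegativeSampleQueue:
    """Minimal sketch of a recycling queue that decouples the negative
    sample pool from the mini-batch size (hypothetical interface)."""

    def __init__(self, pool_size, dim):
        self.dim = dim
        # oldest embeddings are evicted automatically once the pool is full
        self.queue = deque(maxlen=pool_size)

    def enqueue(self, embeddings):
        # recycle the current batch's embeddings as negatives for later batches
        for e in embeddings:
            self.queue.append(np.asarray(e, dtype=np.float32))

    def negatives(self):
        # negatives available beyond the current mini-batch
        if not self.queue:
            return np.empty((0, self.dim), dtype=np.float32)
        return np.stack(self.queue)

# usage: two batches of 4 embeddings feed a pool of up to 1024 negatives
pool = NegativeSampleQueue(pool_size=1024, dim=8)
pool.enqueue(np.random.randn(4, 8))
pool.enqueue(np.random.randn(4, 8))
print(pool.negatives().shape)  # (8, 8)
```

Because the pool size is a free hyperparameter, the number of negatives seen per step no longer depends on how large a batch fits in memory.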
📝 Abstract
Remote sensing text–image retrieval (RSTIR) aims to retrieve matched remote sensing (RS) images from a database according to descriptive text. Recently, the rapid development of large vision-language pre-training models has provided new insights for RSTIR. Nevertheless, as model complexity grows in RSTIR, previous studies suffer from suboptimal resource efficiency during transfer learning. To address this issue, we propose a computation- and memory-efficient retrieval (CMER) framework for RSTIR. To reduce training memory consumption, we propose the Focus-Adapter module, which adopts a side-branch structure; its focus layer suppresses the interference of background pixels on small targets. Simultaneously, to enhance data efficacy, we regard the RS scene category as metadata and design a concise augmentation technique. The scene label augmentation leverages prior knowledge from land cover categories and shrinks the search space. We further propose a negative sample recycling strategy that decouples the negative sample pool from the mini-batch size, improving generalization performance without introducing additional encoders. We have conducted quantitative and qualitative experiments on public datasets and expanded the benchmark with several advanced approaches, demonstrating the competitiveness of the proposed CMER. Compared with recent advanced methods, the overall retrieval performance of CMER is 2%–5% higher on RSITMD. Moreover, our proposed method reduces memory consumption by 49% and achieves 1.4× the data throughput during training. The code of CMER and the dataset will be released at https://github.com/ZhangWeihang99/CMER.
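The memory savings come from the side-branch design: the pre-trained backbone stays frozen, and only a small bottleneck adapter is trained, while a focus weighting down-weights background tokens so small targets dominate the adapted features. The following is an illustrative NumPy sketch under assumed shapes and names (`FocusAdapterSketch`, `bottleneck`, `focus_weights` are hypothetical); the paper's actual focus layer is more elaborate.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class FocusAdapterSketch:
    """Illustrative bottleneck adapter on a side branch (hypothetical).
    Only the two small projection matrices would be trained; the
    backbone producing the input tokens stays frozen."""

    def __init__(self, dim, bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        self.down = rng.standard_normal((dim, bottleneck)) * 0.02  # compress
        self.up = rng.standard_normal((bottleneck, dim)) * 0.02    # expand

    def __call__(self, tokens, focus_weights):
        # focus_weights in [0, 1] suppress background token contributions
        weighted = tokens * focus_weights[:, None]
        # residual connection keeps the frozen backbone features intact
        return tokens + relu(weighted @ self.down) @ self.up

tokens = np.random.default_rng(1).standard_normal((16, 32))  # 16 tokens, dim 32
focus = np.ones(16)
focus[8:] = 0.1                      # damp the tokens treated as background
adapter = FocusAdapterSketch(dim=32, bottleneck=8)
out = adapter(tokens, focus)
print(out.shape)  # (16, 32)
```

Since gradients would only flow through the small `down`/`up` matrices on the side branch, training memory is far lower than full fine-tuning of the backbone.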