🤖 AI Summary
To address the severe domain shift between remote sensing (RS) imagery and natural images, and the difficulty of modeling multi-modal RS semantics (e.g., SAR, infrared) within CLIP's pre-trained space, this paper proposes a two-stage zero-shot adaptation framework. First, robust fine-tuning mitigates the distribution shift without training from scratch. Second, a lightweight cross-modal alignment module maps features from heterogeneous RS encoders into CLIP's joint visual–textual semantic space. The method requires no text annotations, introduces no task-specific parameters, avoids catastrophic forgetting, and preserves CLIP's original capabilities. Evaluated on multiple RS benchmarks, it achieves state-of-the-art zero-shot performance in image classification and cross-modal retrieval, significantly outperforming both vanilla CLIP and existing domain-specific models. To our knowledge, this is the first work to enable unified, scalable alignment of multi-modal RS data with CLIP's semantic space.
📝 Abstract
Deep Learning (DL) is undergoing a paradigm shift with the emergence of foundation models, aptly named for their crucial, yet incomplete, nature. In this work, we focus on Contrastive Language-Image Pre-training (CLIP), an open-vocabulary foundation model, which achieves high accuracy across many image classification tasks and is often competitive with a fully supervised baseline without being explicitly trained. Nevertheless, there are still domains where zero-shot CLIP performance is far from optimal, such as Remote Sensing (RS) and medical imagery. These domains not only exhibit fundamentally different distributions compared to natural images, but also commonly rely on complementary modalities, beyond RGB, to derive meaningful insights. To this end, we propose a methodology for aligning distinct RS imagery modalities with the visual and textual modalities of CLIP. Our two-stage procedure comprises robust fine-tuning of CLIP to deal with the distribution shift, followed by the cross-modal alignment of an RS modality encoder, in an effort to extend the zero-shot capabilities of CLIP. We ultimately demonstrate our method on the tasks of RS imagery classification and cross-modal retrieval. We empirically show that both robust fine-tuning and cross-modal alignment translate to significant performance gains across several RS benchmark datasets. Notably, these enhancements are achieved without reliance on textual descriptions, without introducing any task-specific parameters, without training from scratch and without catastrophic forgetting.
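The two-stage procedure can be sketched in miniature. Below is a toy NumPy illustration, not the paper's implementation: the first stage is rendered as weight-space interpolation between zero-shot and fine-tuned weights (the common robust fine-tuning recipe), and the second stage is simplified to a linear least-squares map from an RS encoder's feature space into CLIP's embedding space. All names (`robust_interpolate`, `fit_alignment`, the synthetic features) are illustrative assumptions.

```python
import numpy as np

def robust_interpolate(w_zeroshot, w_finetuned, alpha=0.5):
    """Stage 1 (sketch): interpolate between zero-shot and fine-tuned
    CLIP weights, trading robustness against in-domain accuracy."""
    return {k: (1 - alpha) * w_zeroshot[k] + alpha * w_finetuned[k]
            for k in w_zeroshot}

def fit_alignment(rs_feats, clip_feats):
    """Stage 2 (sketch): fit a linear projection from RS-encoder
    features to CLIP-space targets by least squares, standing in for
    the lightweight cross-modal alignment step."""
    W, *_ = np.linalg.lstsq(rs_feats, clip_feats, rcond=None)
    return W

rng = np.random.default_rng(0)
w0 = {"proj": rng.normal(size=(4, 4))}   # zero-shot weights (toy)
w1 = {"proj": rng.normal(size=(4, 4))}   # fine-tuned weights (toy)
w_robust = robust_interpolate(w0, w1, alpha=0.5)

rs = rng.normal(size=(32, 8))             # e.g. SAR encoder features
clip = rs @ rng.normal(size=(8, 4))       # synthetic CLIP-space targets
W = fit_alignment(rs, clip)
aligned = rs @ W                          # RS features mapped into CLIP space
print(aligned.shape)                      # (32, 4)
```

Once RS features live in CLIP's joint space, zero-shot classification and retrieval reduce to the usual cosine-similarity comparison against CLIP text embeddings, with no text annotations needed for the RS modality itself.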