AI Summary
Direct transfer of foundation vision-language models (e.g., Grounding DINO, OFA) to remote sensing visual grounding suffers significant performance degradation due to domain shift. Method: We propose a lightweight cross-domain adaptation framework. For the first time, we systematically investigate LoRA's effectiveness across all modules of Grounding DINO; for OFA, we synergistically combine BitFit and Adapter modules for parameter-efficient fine-tuning. Contribution/Results: Our method fine-tunes fewer than 10% of model parameters, reducing training cost by over 90% and substantially accelerating inference, while achieving state-of-the-art or competitive performance on multiple remote sensing visual grounding benchmarks. This work delivers a practical, low-overhead, high-performance, and deployment-friendly solution for multimodal remote sensing understanding.
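To make the LoRA idea concrete, here is a minimal numpy sketch of a low-rank update on a single frozen linear layer; it is illustrative only and not the paper's implementation (the rank `r`, scaling `alpha`, and dimensions are assumptions, and the real method injects such updates into Grounding DINO's modules):

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    """Frozen dense layer with a trainable low-rank update (illustrative sketch).

    Forward: y = W x + (alpha / r) * B (A x), where the pretrained weight W is
    frozen and only the small matrices A (r x d_in) and B (d_out x r) are trained.
    """
    def __init__(self, d_in, d_out, r=8, alpha=16):
        self.W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
        self.A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
        self.B = np.zeros((d_out, r))                    # trainable up-projection, zero init
        self.scale = alpha / r

    def __call__(self, x):
        # Low-rank residual added to the frozen layer's output.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self):
        return self.A.size + self.B.size

layer = LoRALinear(d_in=768, d_out=768, r=8)
print(f"trainable fraction: {layer.trainable_params() / layer.W.size:.3%}")
# → trainable fraction: 2.083%
```

Because `B` is zero-initialized, the adapted layer is exactly the pretrained layer at the start of fine-tuning; only the ~2% of parameters in `A` and `B` are updated, which is what keeps training cost low.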
Abstract
Foundation models have revolutionized artificial intelligence (AI), offering remarkable capabilities across multimodal domains. Their ability to precisely locate objects in complex aerial and satellite images, using rich contextual information and detailed object descriptions, is essential for remote sensing (RS). These models associate textual descriptions with object positions through the Visual Grounding (VG) task, but domain-specific challenges make their direct application to RS sub-optimal. To address this, we applied Parameter-Efficient Fine-Tuning (PEFT) techniques to adapt these models to RS-specific VG tasks. Specifically, we evaluated LoRA placement across different modules of Grounding DINO and used BitFit and adapters to fine-tune the OFA foundation model pre-trained on general-purpose VG datasets. This approach achieved performance comparable to or surpassing current state-of-the-art (SOTA) models while significantly reducing computational costs. This study highlights the potential of PEFT techniques to advance efficient and precise multimodal analysis in RS, offering a practical and cost-effective alternative to full model training.
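The two PEFT techniques applied to OFA can be sketched in a few lines: BitFit freezes everything except bias terms, and an adapter is a small bottleneck MLP with a residual connection. This is a minimal, self-contained numpy illustration; the parameter names, bottleneck size, and shapes are hypothetical, not OFA's actual layout:

```python
import numpy as np

rng = np.random.default_rng(0)

def bitfit_trainable(params):
    """BitFit sketch: of all named parameters, only biases stay trainable."""
    return [name for name in params if name.endswith(".bias")]

class Adapter:
    """Bottleneck adapter sketch: down-project, ReLU, up-project, plus a
    residual connection. The up-projection is zero-initialized, so the
    adapter is an identity map at the start of fine-tuning."""
    def __init__(self, d_model, bottleneck=64):
        self.W_down = rng.standard_normal((bottleneck, d_model)) * 0.01
        self.W_up = np.zeros((d_model, bottleneck))

    def __call__(self, h):
        return h + self.W_up @ np.maximum(self.W_down @ h, 0.0)

# Hypothetical parameter dict standing in for a transformer sublayer.
params = {
    "encoder.attn.weight": np.zeros((768, 768)),
    "encoder.attn.bias":   np.zeros(768),
    "encoder.ffn.weight":  np.zeros((3072, 768)),
    "encoder.ffn.bias":    np.zeros(3072),
}
trainable = bitfit_trainable(params)
print(trainable)  # only the two bias vectors remain trainable
```

Combining the two, BitFit nudges the frozen backbone through its biases while the adapters add a small amount of new task-specific capacity; together they account for only a small fraction of the model's parameters.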