Efficient Few-Shot Learning in Remote Sensing: Fusing Vision and Vision-Language Models

📅 2025-10-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of scarce domain-specific annotations, weak contextual understanding, and performance degradation due to image degradation in remote sensing image analysis, this paper proposes a few-shot analytical framework that synergistically integrates vision models with vision-language models (VLMs). It is the first work to systematically combine YOLO-based detectors with multimodal large models—including LLaVA, CLIP, ChatGPT, and Gemini—leveraging cross-modal alignment and semantic-guided inference to enhance contextual awareness for small-object detection and scene understanding. Evaluated on aircraft detection and counting, the method reduces mean absolute error (MAE) by 48.46% and improves CLIPScore by 6.17%, significantly boosting localization accuracy and semantic parsing robustness under low-quality imaging conditions. This work establishes a novel, generalizable paradigm for intelligent remote sensing interpretation—requiring minimal annotation effort while enabling strong contextual modeling.

Technology Category

Application Category

📝 Abstract
Remote sensing has become a vital tool across sectors such as urban planning, environmental monitoring, and disaster response. While the volume of data generated has increased significantly, traditional vision models are often constrained by the requirement for extensive domain-specific labelled data and their limited ability to understand the context within complex environments. Vision Language Models offer a complementary approach by integrating visual and textual data; however, their application to remote sensing remains underexplored, particularly given their generalist nature. This work investigates the combination of vision models and VLMs to enhance image analysis in remote sensing, with a focus on aircraft detection and scene understanding. The integration of YOLO with VLMs such as LLaVA, ChatGPT, and Gemini aims to achieve more accurate and contextually aware image interpretation. Performance is evaluated on both labelled and unlabelled remote sensing data, as well as degraded image scenarios which are crucial for remote sensing. The findings show an average MAE improvement of 48.46% across models in the accuracy of aircraft detection and counting, especially in challenging conditions, in both raw and degraded scenarios. A 6.17% improvement in CLIPScore for comprehensive understanding of remote sensing images is obtained. The proposed approach combining traditional vision models and VLMs paves the way for more advanced and efficient remote sensing image analysis, especially in few-shot learning scenarios.
Problem

Research questions and friction points this paper is trying to address.

Enhancing aircraft detection accuracy in remote sensing
Improving scene understanding with vision-language model fusion
Addressing limited labeled data through few-shot learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fusing vision models with vision-language models
Integrating YOLO with LLaVA, ChatGPT, Gemini
Improving aircraft detection in few-shot scenarios
🔎 Similar Papers
No similar papers found.