🤖 AI Summary
Large Vision-Language Models (LVLMs) struggle to generalize to remote sensing imagery due to substantial discrepancies in visual appearance, object scale, and semantic granularity—from coarse-grained scenes to fine-grained objects—compared to natural images. To address this, we propose a Semantic-Enhanced Multi-level Alignment framework coupled with a Semantic-Aware Mixture-of-Experts architecture. Our approach integrates a retrieval-based semantic enhancement module, a domain-specific remote sensing semantic knowledge base, multi-level visual feature alignment, a hierarchically structured semantic expert network, and a cross-modal fusion module to enable layered semantic understanding. Evaluated on remote sensing scene classification and visual question answering tasks, our model achieves significant improvements in multi-level semantic consistency, effectively bridging the performance gap between general-purpose LVLMs and domain-specialized remote sensing comprehension capabilities.
📝 Abstract
Large Vision-Language Models (LVLMs) have shown strong performance across various vision-language tasks in natural image domains. However, their application to remote sensing (RS) remains underexplored due to significant domain differences in visual appearance, object scale, and semantics. These discrepancies hinder the effective understanding of RS scenes, which contain rich, multi-level semantic information spanning from coarse to fine levels, and thus limit the direct adaptation of existing LVLMs to RS imagery. To address this gap, we propose a novel LVLM framework tailored for RS understanding, incorporating two core components: Semantic-augmented Multi-level Alignment and Semantic-aware Expert Modeling. First, to align multi-level visual features, we introduce a retrieval-based Semantic Augmentation Module, which enriches the visual features with relevant semantics across fine-to-coarse levels (e.g., object- and scene-level information). It is designed to retrieve relevant semantic cues from an RS semantic knowledge database and then aggregate these cues with the user query and multi-level visual features, resulting in semantically enriched representations across multiple levels. Second, for Semantic-aware Expert Modeling, we design semantic experts, each responsible for processing semantic representations at a different level. This enables hierarchical semantic understanding from coarse to fine levels. Evaluations across multiple RS tasks, including scene classification and visual question answering (VQA), demonstrate that the proposed framework achieves consistent improvements across multiple semantic levels. This highlights its capability and effectiveness in bridging the gap between general LVLMs and the unique demands of RS-specific vision-language understanding.
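The two components described in the abstract can be sketched roughly as follows. This is a minimal PyTorch-style illustration, not the paper's actual implementation: the cosine-similarity retrieval, cross-attention fusion, per-level expert MLPs, and all module names and dimensions are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAugmentation(nn.Module):
    """Hypothetical sketch: retrieve top-k semantic cues from a precomputed
    RS knowledge base and fuse them with one level of visual features via
    cross-attention (assumed design, not the paper's exact module)."""
    def __init__(self, dim, kb_embeddings, top_k=4):
        super().__init__()
        # kb_embeddings: (num_entries, dim) precomputed text embeddings
        self.register_buffer("kb", kb_embeddings)
        self.top_k = top_k
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, visual_feats, query_feat):
        # visual_feats: (B, N, dim) features at one semantic level
        # query_feat:   (B, dim) pooled user-query embedding
        sims = F.normalize(query_feat, dim=-1) @ F.normalize(self.kb, dim=-1).T
        idx = sims.topk(self.top_k, dim=-1).indices      # (B, top_k)
        cues = self.kb[idx]                              # (B, top_k, dim)
        # aggregate retrieved cues with the user query as attention context
        ctx = torch.cat([cues, query_feat.unsqueeze(1)], dim=1)
        enriched, _ = self.attn(visual_feats, ctx, ctx)  # cross-attend to cues
        return visual_feats + enriched                   # residual fusion

class SemanticExperts(nn.Module):
    """Hypothetical sketch: one expert network per semantic level
    (e.g., scene / region / object), each processing its level separately."""
    def __init__(self, dim, num_levels=3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(num_levels)
        )

    def forward(self, feats_per_level):
        # feats_per_level: list of (B, N, dim) tensors, one per level
        return [expert(feats)
                for expert, feats in zip(self.experts, feats_per_level)]
```

In this reading, each semantic level keeps its own enriched representation and its own expert, so coarse scene context and fine object detail are modeled in separate pathways before any downstream fusion.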