🤖 AI Summary
To address two key bottlenecks in remote sensing (RS) image multimodal fusion—(1) fixed-resolution inputs that hinder the trade-off between computational efficiency and fine-grained detail preservation, and (2) single-scale cross-modal alignment lacking hierarchical semantic modeling—this paper proposes a Dynamic Resolution Input Strategy (DRIS) and a Multi-Scale Vision–Language Alignment Mechanism (MS-VLAM). DRIS enables content-aware coarse-to-fine resolution adaptation, while MS-VLAM establishes three-level semantic consistency: object-level, local-level, and global-level alignment. Built upon a vision–language model framework, our approach integrates multi-granularity cross-modal feature alignment with hierarchical semantic embedding. Evaluated on the RS-GPT4V benchmark, our method achieves substantial improvements: +12.3% BLEU-4 and +15.6% CIDEr for image captioning, and +18.9% Recall@10 for cross-modal retrieval—demonstrating simultaneous gains in both accuracy and computational efficiency.
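To make the content-aware resolution adaptation concrete, here is a minimal sketch of how a DRIS-style front end could behave, assuming a simple gradient-based complexity score and an illustrative resolution ladder. The paper does not spell out its complexity criterion, candidate resolutions, or thresholds here, so `content_complexity`, `RESOLUTION_LADDER`, and the threshold values below are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical resolution ladder; the paper's actual candidate set is not stated here.
RESOLUTION_LADDER = (224, 336, 448, 672)


def content_complexity(image: torch.Tensor) -> float:
    """Rough proxy for scene complexity: mean gradient magnitude on a
    grayscale, downsampled copy of the image (shape (1, 3, H, W), values in [0, 1])."""
    gray = image.mean(dim=1, keepdim=True)                        # (1, 1, H, W)
    coarse = F.interpolate(gray, size=(128, 128),
                           mode="bilinear", align_corners=False)
    dx = coarse[..., :, 1:] - coarse[..., :, :-1]                 # horizontal gradients
    dy = coarse[..., 1:, :] - coarse[..., :-1, :]                 # vertical gradients
    return (dx.abs().mean() + dy.abs().mean()).item()


def select_resolution(image: torch.Tensor,
                      thresholds=(0.02, 0.05, 0.09)) -> int:
    """Coarse-to-fine choice: busier scenes get a finer input resolution.
    Threshold values are illustrative, not taken from the paper."""
    score = content_complexity(image)
    for level, t in enumerate(thresholds):
        if score < t:
            return RESOLUTION_LADDER[level]
    return RESOLUTION_LADDER[-1]


def dynamic_resize(image: torch.Tensor) -> torch.Tensor:
    """Resize the input to the selected resolution before the vision encoder."""
    side = select_resolution(image)
    return F.interpolate(image, size=(side, side),
                         mode="bilinear", align_corners=False)


if __name__ == "__main__":
    img = torch.rand(1, 3, 1024, 1024)   # stand-in for a remote sensing tile
    print(dynamic_resize(img).shape)
```

The design intent this sketch captures is that simple, low-texture scenes are downsampled aggressively to save compute, while detail-rich scenes keep a finer input grid so small objects survive into the vision encoder.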
📝 Abstract
Multimodal fusion of remote sensing images is a core technology for overcoming the limitations of single-source data and improving the accuracy of surface information extraction, with significant application value in fields such as environmental monitoring and urban planning. To address the deficiencies of existing methods, namely the inability of fixed-resolution inputs to balance efficiency with detail preservation and the lack of semantic hierarchy in single-scale alignment, this study proposes a vision-language model (VLM) framework integrating two key innovations: the Dynamic Resolution Input Strategy (DRIS) and the Multi-Scale Vision-Language Alignment Mechanism (MS-VLAM). Specifically, DRIS adopts a coarse-to-fine approach that adaptively allocates computational resources according to the complexity of image content, preserving key fine-grained features while reducing redundant computation. MS-VLAM constructs a three-tier alignment mechanism covering the object, local-region, and global levels, which systematically captures cross-modal semantic consistency and alleviates semantic misalignment and granularity imbalance. Experimental results on the RS-GPT4V dataset demonstrate that the proposed framework significantly improves both semantic understanding accuracy and computational efficiency in tasks including image captioning and cross-modal retrieval, outperforming conventional methods on metrics such as BLEU-4 and CIDEr for captioning and R@10 for retrieval. This framework provides a new approach for constructing efficient and robust multimodal remote sensing systems, laying a theoretical foundation and offering technical guidance for the engineering application of intelligent remote sensing interpretation.
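As a reading aid for the three-tier alignment idea, the following is a minimal sketch that treats each level (object, local-region, global) as a paired image-text contrastive objective and combines them. The symmetric InfoNCE form, the equal weighting, and the temperature value are assumptions chosen for illustration; the paper's exact loss formulation and how the paired embeddings are produced are not reproduced here.

```python
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over matched pairs: a[i] should align with b[i]."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def ms_vlam_loss(obj_img, obj_txt,        # (N_obj, D) object-level embeddings
                 region_img, region_txt,  # (N_reg, D) local-region embeddings
                 global_img, global_txt,  # (B, D)     whole-image / caption embeddings
                 weights=(1.0, 1.0, 1.0)) -> torch.Tensor:
    """Three-tier alignment objective: object, local-region, and global terms are
    each contrastive losses over paired embeddings, then summed. The equal
    weighting here is a placeholder, not the paper's setting."""
    l_obj = info_nce(obj_img, obj_txt)
    l_reg = info_nce(region_img, region_txt)
    l_glb = info_nce(global_img, global_txt)
    return weights[0] * l_obj + weights[1] * l_reg + weights[2] * l_glb


if __name__ == "__main__":
    D = 256
    loss = ms_vlam_loss(torch.randn(16, D), torch.randn(16, D),
                        torch.randn(32, D), torch.randn(32, D),
                        torch.randn(8, D),  torch.randn(8, D))
    print(loss.item())
```

The point of separating the three terms is that each granularity constrains a different failure mode: the object term anchors small-target semantics, the local-region term handles spatial context, and the global term keeps the whole-scene description and caption consistent.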