🤖 AI Summary
Current large multimodal models (LMMs) struggle to comprehend Earth observation (EO) data, hindering fine-grained monitoring of environmental dynamics. To address this, EarthMind is a vision-language framework for multi-granular, multi-sensor EO understanding. It introduces Spatial Attention Prompting (SAP), which reallocates attention within the LLM to strengthen pixel-level perception, and a cross-modal fusion strategy that aligns heterogeneous sensor modalities in a shared space and adaptively reweighs tokens by their information density. The authors also construct EarthMind-Bench, a benchmark of over 2,000 human-annotated multi-sensor image-question pairs spanning a wide range of perception and reasoning tasks. On EarthMind-Bench, the 4B-parameter model surpasses GPT-4o, and it achieves state-of-the-art results on multiple public EO benchmarks, handling both multi-granular and multi-sensor tasks within a unified architecture.
📝 Abstract
Large Multimodal Models (LMMs) have demonstrated strong performance in various vision-language tasks. However, they often struggle to comprehensively understand Earth Observation (EO) data, which is critical for monitoring the environment and the effects of human activity on it. In this work, we present EarthMind, a novel vision-language framework for multi-granular and multi-sensor EO data understanding. EarthMind features two core components: (1) Spatial Attention Prompting (SAP), which reallocates attention within the LLM to enhance pixel-level understanding; and (2) Cross-modal Fusion, which aligns heterogeneous modalities into a shared space and adaptively reweighs tokens based on their information density for effective fusion. To facilitate multi-sensor fusion evaluation, we propose EarthMind-Bench, a comprehensive benchmark with over 2,000 human-annotated multi-sensor image-question pairs, covering a wide range of perception and reasoning tasks. Extensive experiments demonstrate the effectiveness of EarthMind. It achieves state-of-the-art performance on EarthMind-Bench, surpassing GPT-4o despite being only 4B in scale. Moreover, EarthMind outperforms existing methods on multiple public EO benchmarks, showcasing its potential to handle both multi-granular and multi-sensor challenges in a unified framework.
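The adaptive fusion idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes token embeddings from two sensors (e.g., optical and SAR) have already been projected into a shared space, and it uses the L2 norm of each token embedding as a hypothetical proxy for information density (the abstract does not specify the actual measure).

```python
import numpy as np

def information_density(tokens):
    # Hypothetical proxy for information density: the L2 norm of each
    # token embedding. (The actual measure used by EarthMind may differ.)
    return np.linalg.norm(tokens, axis=-1)

def fuse_tokens(optical_tokens, sar_tokens, temperature=1.0):
    """Adaptively reweigh and fuse aligned token sequences from two sensors.

    Both inputs are (num_tokens, dim) arrays assumed to live in a shared
    embedding space. Each output token is a convex combination of the two
    modalities, weighted per token by relative information density.
    """
    d_opt = information_density(optical_tokens)
    d_sar = information_density(sar_tokens)
    # Per-token softmax over the two modalities' density scores.
    scores = np.stack([d_opt, d_sar]) / temperature      # (2, num_tokens)
    scores -= scores.max(axis=0, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)        # sums to 1 per token
    return (weights[0, :, None] * optical_tokens
            + weights[1, :, None] * sar_tokens)
```

Under this sketch, tokens from the modality carrying more signal at a given position dominate the fused representation, while low-density tokens (e.g., from a cloud-obscured optical view) are downweighted rather than discarded.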