EarthMind: Towards Multi-Granular and Multi-Sensor Earth Observation with Large Multimodal Models

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large multimodal models (LMMs) have limited capability to comprehend Earth observation (EO) data, which hinders fine-grained monitoring of environmental dynamics. To address this, the paper proposes EarthMind, a vision-language framework for multi-granular, multi-sensor remote sensing understanding. The method introduces Spatial Attention Prompting, which reallocates attention within the LLM to strengthen pixel-level perception, and a cross-modal fusion strategy that aligns heterogeneous sensor modalities in a shared space and adaptively reweighs tokens according to their information density. The paper also constructs EarthMind-Bench, a benchmark of over 2,000 human-annotated multi-sensor image-question pairs spanning perception and reasoning tasks. On EarthMind-Bench, EarthMind surpasses GPT-4o despite having only 4B parameters; it also achieves state-of-the-art performance across multiple public remote sensing benchmarks, handling multi-granular and multi-sensor EO tasks within a unified architecture.

📝 Abstract
Large Multimodal Models (LMMs) have demonstrated strong performance in various vision-language tasks. However, they often struggle to comprehensively understand Earth Observation (EO) data, which is critical for monitoring the environment and the effects of human activity on it. In this work, we present EarthMind, a novel vision-language framework for multi-granular and multi-sensor EO data understanding. EarthMind features two core components: (1) Spatial Attention Prompting (SAP), which reallocates attention within the LLM to enhance pixel-level understanding; and (2) Cross-modal Fusion, which aligns heterogeneous modalities into a shared space and adaptively reweighs tokens based on their information density for effective fusion. To facilitate multi-sensor fusion evaluation, we propose EarthMind-Bench, a comprehensive benchmark with over 2,000 human-annotated multi-sensor image-question pairs, covering a wide range of perception and reasoning tasks. Extensive experiments demonstrate the effectiveness of EarthMind. It achieves state-of-the-art performance on EarthMind-Bench, surpassing GPT-4o despite being only 4B in scale. Moreover, EarthMind outperforms existing methods on multiple public EO benchmarks, showcasing its potential to handle both multi-granular and multi-sensor challenges in a unified framework.
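The abstract only says that SAP "reallocates attention within the LLM" to improve pixel-level understanding. Below is a minimal sketch of what such a reallocation could look like; the additive-bias formulation and all names (`spatially_prompted_attention`, `region_mask`, `bias_scale`) are illustrative assumptions, not the paper's actual mechanism.

```python
# Hypothetical sketch of attention reallocation in the spirit of Spatial
# Attention Prompting (SAP). The bias term below is an assumed stand-in
# for whatever reallocation EarthMind actually performs.
import torch
import torch.nn.functional as F

def spatially_prompted_attention(q, k, v, region_mask, bias_scale=1.0):
    """Scaled dot-product attention with extra mass steered toward
    visual tokens inside a prompted spatial region.

    q, k, v:      (batch, heads, seq, dim) query/key/value tensors
    region_mask:  (batch, seq) float mask, 1.0 for tokens inside the
                  prompted region, 0.0 elsewhere (hypothetical input)
    bias_scale:   strength of the reallocation (assumed hyperparameter)
    """
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d**0.5           # (B, H, S, S)
    # Additive bias on key positions inside the region: raising their
    # logits before the softmax shifts attention mass toward region
    # tokens without masking any other token out.
    bias = bias_scale * region_mask[:, None, None, :]   # (B, 1, 1, S)
    attn = F.softmax(logits + bias, dim=-1)
    return attn @ v
```

A soft additive bias (rather than a hard mask) preserves global context while still concentrating attention on the queried region, which is consistent with the pixel-level goal the abstract describes.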
Problem

Research questions and friction points this paper is trying to address.

Enhancing Earth Observation data understanding with LMMs
Addressing multi-granular and multi-sensor fusion challenges
Improving pixel-level analysis via spatial attention mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatial Attention Prompting enhances pixel-level understanding
Cross-modal Fusion aligns heterogeneous modalities and adaptively reweighs tokens (see the sketch after this list)
EarthMind-Bench benchmark for multi-sensor fusion evaluation
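The adaptive, information-density-based token reweighting in the Cross-modal Fusion component could plausibly look like the sketch below. The learned scalar scorer, the optical/SAR modality pair, and all class and parameter names are assumptions for illustration; the paper's exact formulation is not given on this page.

```python
# Hypothetical sketch of information-density-weighted cross-modal fusion.
# The abstract states tokens are adaptively reweighed "based on their
# information density"; the learned scorer here is an assumed proxy.
import torch
import torch.nn as nn

class DensityWeightedFusion(nn.Module):
    def __init__(self, optical_dim, sar_dim, shared_dim):
        super().__init__()
        # Per-modality projections into a shared embedding space.
        self.proj_optical = nn.Linear(optical_dim, shared_dim)
        self.proj_sar = nn.Linear(sar_dim, shared_dim)
        # Scalar "information density" score per token (learned proxy).
        self.scorer = nn.Linear(shared_dim, 1)

    def forward(self, optical_tokens, sar_tokens):
        # optical_tokens: (B, N_opt, optical_dim)
        # sar_tokens:     (B, N_sar, sar_dim)
        tokens = torch.cat(
            [self.proj_optical(optical_tokens), self.proj_sar(sar_tokens)],
            dim=1,
        )                                      # (B, N_opt + N_sar, shared_dim)
        # A softmax over all tokens turns raw scores into fusion weights,
        # so low-information tokens are downweighted rather than dropped.
        weights = torch.softmax(self.scorer(tokens).squeeze(-1), dim=-1)
        return tokens * weights.unsqueeze(-1)  # reweighted token sequence
```

The output keeps the full token sequence at the shared dimension, so it can be fed to the LLM like any other visual token stream; only the relative contribution of each sensor's tokens changes.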
Yan Shu
University of Trento (formerly Harbin Institute of Technology)
Vision and Language · Multi-modal Learning · Video Understanding · OCR · Remote Sensing
Bin Ren
University of Trento, University of Pisa, INSAIT, Sofia University “St. Kliment Ohridski”
Zhitong Xiong
Technical University of Munich
Deep Learning · Remote Sensing · Computer Vision
D. Paudel
INSAIT, Sofia University “St. Kliment Ohridski”
L. V. Gool
INSAIT, Sofia University “St. Kliment Ohridski”
Begum Demir
Technische Universität Berlin
N. Sebe
University of Trento
Paolo Rota
Associate Professor @ University of Trento
Computer Vision · Video Understanding · Vision and Language · Motion Understanding