🤖 AI Summary
Current multimodal large language models (MLLMs) struggle with multi-step reasoning and domain-specific tool integration in Earth observation tasks, and the field lacks a systematic evaluation framework tailored to remote sensing agents. Method: We propose Earth-Agent, the first multimodal agent framework that jointly processes RGB and spectral remote sensing data, leveraging a Model Context Protocol (MCP) tool ecosystem to enable cross-modal, quantitative spatiotemporal reasoning and geophysical parameter retrieval. Contribution/Results: We introduce Earth-Bench, a dedicated benchmark with a two-tiered evaluation protocol, addressing the longstanding gap in systematic assessment of remote sensing agents. Experiments demonstrate that Earth-Agent consistently outperforms state-of-the-art MLLMs across diverse LLM backbones and agent architectures, marking a paradigm shift in remote sensing analysis, from shallow perception to deep, scientific reasoning.
📝 Abstract
Earth observation (EO) is essential for understanding the evolving states of the Earth system. Although recent MLLMs have advanced EO research, they still lack the capability to tackle complex tasks that require multi-step reasoning and the use of domain-specific tools. Agent-based methods offer a promising direction, but current attempts remain in their infancy: they are confined to RGB perception and shallow reasoning, and lack systematic evaluation protocols. To overcome these limitations, we introduce Earth-Agent, the first agentic framework that unifies RGB and spectral EO data within an MCP-based tool ecosystem, enabling cross-modal, multi-step, and quantitative spatiotemporal reasoning beyond pretrained MLLMs. Earth-Agent supports complex scientific tasks such as geophysical parameter retrieval and quantitative spatiotemporal analysis by dynamically invoking expert tools and models across modalities. To support comprehensive evaluation, we further propose Earth-Bench, a benchmark of 248 expert-curated tasks with 13,729 images spanning spectral, product, and RGB modalities, equipped with a dual-level evaluation protocol that assesses both reasoning trajectories and final outcomes. We conduct comprehensive experiments across different LLM backbones, along with comparisons against general agent frameworks and against MLLMs on remote sensing benchmarks, demonstrating both the effectiveness and potential of Earth-Agent. Earth-Agent establishes a new paradigm for EO analysis, moving the field toward scientifically grounded, next-generation applications of LLMs in Earth observation. Our code and dataset will be publicly released.
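To make the abstract's "geophysical parameter retrieval" concrete, here is a minimal sketch of the kind of spectral tool an agent like Earth-Agent might expose through an MCP-style tool ecosystem. The function name and setup are illustrative assumptions, not the paper's actual tool API; only the NDVI formula, (NIR − Red) / (NIR + Red), is standard.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Compute the Normalized Difference Vegetation Index per pixel.

    NDVI = (NIR - Red) / (NIR + Red), a standard spectral index
    that an EO agent could invoke as one tool in a multi-step plan.
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on no-data pixels.
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Toy 2x2 reflectance bands (hypothetical values, not real imagery).
red_band = np.array([[0.1, 0.2], [0.0, 0.3]])
nir_band = np.array([[0.5, 0.4], [0.0, 0.3]])
print(ndvi(red_band, nir_band))
```

A spectral index like this is exactly the sort of quantitative, cross-modal computation that pretrained MLLMs cannot perform reliably from raw pixels, which is why the framework delegates it to expert tools.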