🤖 AI Summary
Current multimodal large language models (MLLMs) exhibit limited capability in remote sensing, performing only basic captioning and instruction following, and failing to handle complex tasks requiring domain-specific tools and expert knowledge. To address this, we propose RS-Agent, a remote sensing–oriented intelligent agent centered on a large language model (LLM) controller. RS-Agent integrates a dynamic toolbox, task-specific solution spaces, and a structured domain knowledge space. It introduces Task-Aware Retrieval to enhance tool selection accuracy and employs DualRAG—a dual-path, weighted retrieval-augmented generation framework—to improve domain knowledge relevance. The architecture supports multi-LLM compatibility and dynamic tool extensibility. Extensive evaluation across nine remote sensing datasets and 18 diverse tasks demonstrates that RS-Agent achieves over 95% task planning accuracy—significantly surpassing state-of-the-art MLLMs—and delivers superior performance in scene classification, object counting, and remote sensing visual question answering.
📝 Abstract
The unprecedented advancements in Multimodal Large Language Models (MLLMs) have demonstrated strong potential for interacting with humans through both language and visual inputs to perform downstream tasks such as visual question answering and scene understanding. However, these models are constrained to basic instruction-following or descriptive tasks, and they struggle with complex real-world remote sensing applications that require specialized tools and knowledge. To address these limitations, we propose RS-Agent, an AI agent designed to interact with human users and autonomously leverage specialized models to meet the demands of real-world remote sensing applications. RS-Agent integrates four key components: a Central Controller based on large language models, a dynamic toolkit for tool execution, a Solution Space for task-specific expert guidance, and a Knowledge Space for domain-level reasoning, enabling it to interpret user queries and orchestrate tools for accurate remote sensing tasks. We introduce two novel mechanisms: Task-Aware Retrieval, which improves tool selection accuracy through expert-guided planning, and DualRAG, a retrieval-augmented generation method that enhances knowledge relevance through weighted, dual-path retrieval. RS-Agent supports flexible integration of new tools and is compatible with both open-source and proprietary LLMs. Extensive experiments across 9 datasets and 18 remote sensing tasks demonstrate that RS-Agent significantly outperforms state-of-the-art MLLMs, achieving over 95% task planning accuracy and delivering superior performance in tasks such as scene classification, object counting, and remote sensing visual question answering. Our work presents RS-Agent as a robust and extensible framework for advancing intelligent automation in remote sensing analysis.
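The weighted, dual-path retrieval idea behind DualRAG can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the word-overlap scoring, and the default weights are all assumptions. One retrieval path is keyed on the raw user query and the other on the planned task description, and the two relevance scores are merged by a weighted sum before ranking.

```python
# Hypothetical sketch of DualRAG-style weighted dual-path retrieval.
# All names, the scoring scheme, and the weights are illustrative
# assumptions, not the paper's actual method.

def overlap_score(query: str, doc: str) -> float:
    """Crude relevance: fraction of query words that appear in the doc."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def dual_rag(user_query: str, task_desc: str, corpus: list[str],
             w_query: float = 0.6, w_task: float = 0.4,
             k: int = 2) -> list[str]:
    """Score each document along two paths (user query and task
    description), merge the scores with fixed weights, return top-k."""
    scored = []
    for doc in corpus:
        s = (w_query * overlap_score(user_query, doc)
             + w_task * overlap_score(task_desc, doc))
        scored.append((s, doc))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [doc for _, doc in scored[:k]]

# Toy knowledge space standing in for the agent's domain documents.
corpus = [
    "object counting in aerial imagery uses detection models",
    "scene classification assigns a land-use label to an image",
    "visual question answering combines vision and language",
]
top = dual_rag("how many planes are in this aerial image",
               "object counting", corpus)
```

Because the task-description path ("object counting") is weighted in alongside the raw query, the counting document outranks the others even though the query itself shares few words with it; in the real system, the task description would come from the Task-Aware Retrieval planning step.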