🤖 AI Summary
Traditional vision-centric paradigms face inherent limitations in multimodal reasoning, semantic abstraction, and interactive decision-making, while existing studies that integrate LLMs lack a unifying cognitive-theoretical foundation. Method: This paper proposes a language-centered paradigm for intelligent remote sensing image interpretation, introducing Global Workspace Theory (GWT) to the remote sensing domain for the first time. It establishes a unified framework with a large language model (LLM) as the cognitive core, integrating perception, task, knowledge, and action spaces through multimodal representation learning, knowledge association modeling, trustworthy reasoning mechanisms, and autonomous interaction design. Contribution/Results: The framework enables a paradigm shift from “object recognition from imagery” to “knowledge orchestration via language,” systematically articulates the key technical challenges, and provides an interpretable, scalable theoretical and methodological foundation for geospatial cognitive intelligence.
📝 Abstract
The mainstream paradigm of remote sensing image interpretation has long been dominated by vision-centered models, which rely on visual features for semantic understanding. However, these models face inherent limitations in multimodal reasoning, semantic abstraction, and interactive decision-making. While recent advances have introduced Large Language Models (LLMs) into remote sensing workflows, existing studies primarily focus on downstream applications and lack a unified theoretical framework that explains the cognitive role of language. This review advocates a paradigm shift from vision-centered to language-centered remote sensing interpretation. Drawing inspiration from the Global Workspace Theory (GWT) of human cognition, we propose a language-centered framework for remote sensing interpretation that treats the LLM as a central cognitive hub integrating perceptual, task, knowledge, and action spaces to enable unified understanding, reasoning, and decision-making. We first explore the potential of LLMs as the central cognitive component in remote sensing interpretation, and then summarize the core technical challenges, including unified multimodal representation, knowledge association, and reasoning and decision-making. Furthermore, we construct a global workspace-driven interpretation mechanism and review how language-centered solutions address each challenge. Finally, we outline future research directions from four perspectives: adaptive alignment of multimodal data, task understanding under dynamic knowledge constraints, trustworthy reasoning, and autonomous interaction. This work aims to provide a conceptual foundation for the next generation of remote sensing interpretation systems and to establish a roadmap toward cognition-driven intelligent geospatial analysis.
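The paper presents this framework conceptually rather than as code, but the flow of a global workspace-driven interpretation loop can be made concrete with a minimal sketch. The Python below is purely illustrative: every class, method, and placeholder string is a hypothetical stand-in for the four spaces and the LLM hub named in the abstract, not the authors' implementation.

```python
# Illustrative sketch only. All names here are hypothetical stand-ins for the
# perception, knowledge, and action spaces and the LLM hub from the abstract.
from dataclasses import dataclass, field


@dataclass
class Workspace:
    """Shared 'global workspace' state that every module reads and writes."""
    broadcast: dict = field(default_factory=dict)


class PerceptionSpace:
    def encode(self, image: str) -> dict:
        # Stand-in for a vision encoder producing language-aligned tokens.
        return {"visual_tokens": f"<tokens for {image}>"}


class KnowledgeSpace:
    def retrieve(self, query: str) -> dict:
        # Stand-in for retrieval over a geospatial knowledge base.
        return {"facts": f"<domain facts relevant to '{query}'>"}


class ActionSpace:
    def execute(self, plan: str) -> dict:
        # Stand-in for tool calls (e.g., segmentation, change detection).
        return {"result": f"<output of executing: {plan}>"}


class LanguageHub:
    """LLM-as-cognitive-core: integrates all spaces through the workspace."""

    def __init__(self) -> None:
        self.ws = Workspace()
        self.perception = PerceptionSpace()
        self.knowledge = KnowledgeSpace()
        self.action = ActionSpace()

    def interpret(self, image: str, task: str) -> str:
        # 1. Perception space broadcasts visual evidence into the workspace.
        self.ws.broadcast.update(self.perception.encode(image))
        # 2. Knowledge space contributes task-relevant domain knowledge.
        self.ws.broadcast.update(self.knowledge.retrieve(task))
        # 3. The LLM reasons over the broadcast state to form an action plan.
        plan = self._reason(task)
        # 4. Action space executes the plan; results re-enter the workspace.
        self.ws.broadcast.update(self.action.execute(plan))
        return self._reason("summarize findings")

    def _reason(self, prompt: str) -> str:
        # Placeholder for an actual LLM call conditioned on the workspace.
        return f"LLM({prompt} | {sorted(self.ws.broadcast)})"


if __name__ == "__main__":
    hub = LanguageHub()
    print(hub.interpret("scene_001.tif", "detect newly built structures"))
```

The design point this sketch mirrors is GWT's broadcast mechanism: each space writes its contribution into one shared workspace, and the language hub reasons over that shared state instead of wiring modules together pairwise.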