🤖 AI Summary
To address the challenges of fusing heterogeneous geospatial data—such as satellite imagery, geographic metadata, and textual descriptions—and poor cross-task generalization in GeoAI, this paper introduces GeoMLLM, the first multimodal large language model specifically designed for geospatial intelligence. Methodologically, it proposes a spatially aware adapter and a geography-knowledge-enhanced LLM backbone, integrated with a ViT-based vision encoder and multi-granularity spatial instruction tuning to achieve cross-modal spatial alignment and zero-shot generalization across geographic tasks. Experiments demonstrate that GeoMLLM consistently outperforms both domain-specific models and general-purpose multimodal LLMs on remote sensing understanding, health-related geospatial question answering, and urban perception tasks. Under zero-shot settings, it achieves an average accuracy improvement of 12.7%, significantly surpassing the limitations of conventional single-task models.
📝 Abstract
The rapid advancement of multimodal large language models (LLMs) has opened new frontiers in artificial intelligence, enabling the integration of diverse large-scale data types such as text, images, and spatial information. In this paper, we explore the potential of multimodal LLMs (MLLM) for geospatial artificial intelligence (GeoAI), a field that leverages spatial data to address challenges in domains including Geospatial Semantics, Health Geography, Urban Geography, Urban Perception, and Remote Sensing. We propose a MLLM (OmniGeo) tailored to geospatial applications, capable of processing and analyzing heterogeneous data sources, including satellite imagery, geospatial metadata, and textual descriptions. By combining the strengths of natural language understanding and spatial reasoning, our model enhances the ability of instruction following and the accuracy of GeoAI systems. Results demonstrate that our model outperforms task-specific models and existing LLMs on diverse geospatial tasks, effectively addressing the multimodality nature while achieving competitive results on the zero-shot geospatial tasks. Our code will be released after publication.