Survey of Multimodal Geospatial Foundation Models: Techniques, Applications, and Challenges

📅 2025-10-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal geospatial foundation models (GFMs) face core challenges in remote sensing—modality heterogeneity, distribution shift, and semantic gaps—that hinder robust cross-domain generalization and interpretability. Method: This paper presents a systematic, modality-driven survey of their technical evolution, covering five core visual and vision-language modalities and examining how imaging physics and data representation shape cross-modal interaction design. It reviews key techniques—multi-resolution and multi-temporal modeling, contrastive learning, cross-modal attention, and task-adaptive fine-tuning—that together form an alignment–transfer–generalization pipeline. Contribution/Results: The survey proposes a taxonomy for multimodal GFMs in remote sensing, assesses a wealth of emerging benchmark datasets, and evaluates representative models across ten downstream tasks, including land cover mapping, agricultural monitoring, and disaster response. It closes by outlining open challenges in domain generalization, interpretability, efficiency, and privacy, aiming to advance the practical deployment of multimodal GFMs in geospatial AI.
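The contrastive alignment step named above can be illustrated with a minimal sketch. This is not the paper's own method, only a generic symmetric InfoNCE objective (CLIP-style) over paired embeddings from two modalities, e.g. optical and SAR patches of the same scene; the function names and the temperature value are assumptions for illustration.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def infonce_loss(emb_a, emb_b, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of emb_a and emb_b are assumed to come from the same scene
    (e.g. optical and SAR views); all other rows serve as negatives.
    """
    a = l2_normalize(emb_a)
    b = l2_normalize(emb_b)
    logits = a @ b.T / temperature          # (N, N) cosine-similarity matrix
    labels = np.arange(len(a))              # matching pairs lie on the diagonal

    def xent(lg):
        # numerically stable cross-entropy against the diagonal targets
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the a->b and b->a directions, as in CLIP-style training
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls embeddings of the same scene together across modalities while pushing apart embeddings of different scenes, which is the alignment behavior the surveyed models rely on.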

📝 Abstract
Foundation models have transformed natural language processing and computer vision, and their impact is now reshaping remote sensing image analysis. With powerful generalization and transfer learning capabilities, they align naturally with the multimodal, multi-resolution, and multi-temporal characteristics of remote sensing data. To address unique challenges in the field, multimodal geospatial foundation models (GFMs) have emerged as a dedicated research frontier. This survey delivers a comprehensive review of multimodal GFMs from a modality-driven perspective, covering five core visual and vision-language modalities. We examine how differences in imaging physics and data representation shape interaction design, and we analyze key techniques for alignment, integration, and knowledge transfer to tackle modality heterogeneity, distribution shifts, and semantic gaps. Advances in training paradigms, architectures, and task-specific adaptation strategies are systematically assessed alongside a wealth of emerging benchmarks. Representative multimodal visual and vision-language GFMs are evaluated across ten downstream tasks, with insights into their architectures, performance, and application scenarios. Real-world case studies, spanning land cover mapping, agricultural monitoring, disaster response, climate studies, and geospatial intelligence, demonstrate the practical potential of GFMs. Finally, we outline pressing challenges in domain generalization, interpretability, efficiency, and privacy, and chart promising avenues for future research.
Problem

Research questions and friction points this paper is trying to address.

Surveying multimodal geospatial foundation models' techniques, applications, and challenges
Addressing modality heterogeneity and semantic gaps in remote sensing data
Evaluating GFMs' performance across diverse real-world geospatial applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modality-driven taxonomy of multimodal GFMs grounded in remote sensing imaging physics
Alignment and integration techniques (contrastive learning, cross-modal attention) for knowledge transfer across heterogeneous modalities
Systematic assessment of training paradigms, architectures, and task-adaptive fine-tuning across ten downstream tasks
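The cross-modal attention mentioned in these bullets can be sketched as follows. This is a hedged, single-head illustration, not any surveyed model's architecture: tokens from one modality (the queries, e.g. optical patches) attend over tokens from another (the context, e.g. SAR patches); learned projection matrices are omitted and identity maps assumed.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query_feats, context_feats, d_k):
    """Single-head cross-attention: queries from one modality attend
    over context tokens from another modality.

    query_feats:   (Nq, d_k) tokens of the querying modality
    context_feats: (Nc, d_k) tokens of the context modality
    """
    scores = query_feats @ context_feats.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # (Nq, Nc) attention map
    return weights @ context_feats       # context-enriched query tokens
```

Each output row is a convex combination of context-modality tokens, which is how such layers inject information from one sensor into another's feature stream.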