🤖 AI Summary
Remote sensing image interpretation has long been constrained by unimodal modeling, which limits the effective fusion of complementary modalities (such as optical, SAR, and multispectral data) that could mitigate ambiguity and uncertainty. To address this, we introduce RingMoE, the first general-purpose multimodal foundation model for remote sensing, with 14.7 billion parameters. Our method features a novel hierarchical Mixture-of-Experts (MoE) architecture, integrating physics-informed self-supervised pretraining, sensor-specific radiometric modeling, dynamic sparse activation, and joint contrastive and masked learning across heterogeneous sources. The model natively supports six fundamental tasks, including classification, detection, and segmentation, and establishes new state-of-the-art results on 23 benchmarks. Furthermore, it enables dynamic pruning: compressed to 1.0 billion parameters, it retains competitive performance. Deployed in real-world applications, including emergency response, marine monitoring, and urban planning, the model demonstrates strong practical utility and scalability.
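To make the pruning idea concrete, here is a minimal PyTorch sketch of one way a sparse MoE layer could be compressed by dropping rarely routed experts. The ranking criterion (routing counts gathered on calibration data) and all names are illustrative assumptions, not the model's actual procedure.

```python
# Illustrative sketch only: shrink a sparse MoE layer by keeping its
# most-used experts and the matching rows of the router. Assumes per-expert
# routing counts were collected on a calibration set; names are hypothetical.
import torch
import torch.nn as nn

def prune_experts(experts: nn.ModuleList, gate: nn.Linear,
                  usage: torch.Tensor, keep: int):
    """Keep the `keep` most-used experts and shrink the gate to match.

    experts: pool of expert modules in one MoE layer
    gate:    linear router producing one logit per expert
    usage:   per-expert routing counts from calibration data
    keep:    number of experts to retain
    """
    top = usage.topk(keep).indices.sort().values.tolist()  # preserve order
    pruned_experts = nn.ModuleList([experts[i] for i in top])
    pruned_gate = nn.Linear(gate.in_features, keep, bias=gate.bias is not None)
    with torch.no_grad():
        pruned_gate.weight.copy_(gate.weight[top])  # keep matching logit rows
        if gate.bias is not None:
            pruned_gate.bias.copy_(gate.bias[top])
    return pruned_experts, pruned_gate

# Usage: prune an 8-expert pool down to 2 using fake routing counts.
dim = 64
experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(8)])
gate = nn.Linear(dim, 8)
usage = torch.tensor([5., 120., 3., 80., 1., 7., 2., 4.])  # calibration stats
experts, gate = prune_experts(experts, gate, usage, keep=2)
print(len(experts), gate.out_features)  # 2 2
```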
📄 Abstract
The rapid advancement of foundation models has revolutionized visual representation learning in a self-supervised manner. However, their application in remote sensing (RS) remains constrained by a fundamental gap: existing models predominantly handle single or limited modalities, overlooking the inherently multi-modal nature of RS observations. Optical, synthetic aperture radar (SAR), and multi-spectral data offer complementary insights that significantly reduce the inherent ambiguity and uncertainty in single-source analysis. To bridge this gap, we introduce RingMoE, a unified multi-modal RS foundation model with 14.7 billion parameters, pre-trained on 400 million multi-modal RS images from nine satellites. RingMoE incorporates three key innovations: (1) a hierarchical Mixture-of-Experts (MoE) architecture comprising modal-specialized, collaborative, and shared experts, effectively modeling intra-modal knowledge while capturing cross-modal dependencies to mitigate conflicts between modal representations; (2) physics-informed self-supervised learning, explicitly embedding sensor-specific radiometric characteristics into the pre-training objectives; (3) dynamic expert pruning, enabling adaptive model compression from 14.7B to 1B parameters while maintaining performance, facilitating efficient deployment in Earth observation applications. Evaluated across 23 benchmarks spanning six key RS tasks (i.e., classification, detection, segmentation, tracking, change detection, and depth estimation), RingMoE outperforms existing foundation models and sets new state-of-the-art results, demonstrating remarkable adaptability from single-modal to multi-modal scenarios. Beyond theoretical progress, it has been deployed and trialed in multiple sectors, including emergency response, land management, marine sciences, and urban planning.
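The hierarchical expert design can be pictured concretely. Below is a minimal PyTorch sketch of a layer with the three expert tiers named above (shared, collaborative, modal-specialized); the class names, routing scheme (top-k over softmax gates), and sizes are all illustrative assumptions, not RingMoE's actual implementation.

```python
# Illustrative sketch only: a hierarchical MoE layer with shared,
# collaborative, and modal-specialized expert tiers, as described in the
# abstract. All names and the routing scheme are hypothetical.
import torch
import torch.nn as nn

class Expert(nn.Module):
    """A standard two-layer feed-forward expert."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, dim))
    def forward(self, x):
        return self.net(x)

class HierarchicalMoE(nn.Module):
    """Three expert tiers:
    - shared: always active, captures modality-agnostic knowledge;
    - collaborative: sparsely routed across all modalities, models
      cross-modal dependencies;
    - modal-specialized: one pool per modality (e.g. optical, SAR,
      multi-spectral), routed only within the input's modality.
    """
    def __init__(self, dim, hidden, modalities, n_collab=4, n_spec=4, top_k=1):
        super().__init__()
        self.top_k = top_k
        self.shared = Expert(dim, hidden)
        self.collab = nn.ModuleList([Expert(dim, hidden) for _ in range(n_collab)])
        self.collab_gate = nn.Linear(dim, n_collab)
        self.spec = nn.ModuleDict({
            m: nn.ModuleList([Expert(dim, hidden) for _ in range(n_spec)])
            for m in modalities})
        self.spec_gate = nn.ModuleDict({m: nn.Linear(dim, n_spec)
                                        for m in modalities})

    def _route(self, x, gate, experts):
        # Sparse top-k routing: each token goes to its k highest-gated
        # experts, weighted by the (un-renormalized) softmax gate values.
        weights, idx = gate(x).softmax(-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

    def forward(self, x, modality):
        # x: (tokens, dim); modality: e.g. "optical", "sar", "multispectral"
        out = self.shared(x)                                  # always-on tier
        out = out + self._route(x, self.collab_gate, self.collab)
        out = out + self._route(x, self.spec_gate[modality], self.spec[modality])
        return out

# Usage: route 8 optical tokens through the layer.
layer = HierarchicalMoE(dim=64, hidden=256,
                        modalities=["optical", "sar", "multispectral"])
tokens = torch.randn(8, 64)
print(layer(tokens, "optical").shape)  # torch.Size([8, 64])
```

The design choice this sketch highlights is that only the shared expert is dense; the collaborative and modal-specialized tiers are sparsely activated, which is what makes expert pruning (as sketched earlier) a natural compression path.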