🤖 AI Summary
To address the redundancy of multi-backbone architectures, inefficient parameter utilization, and the incompatibility of natural-image self-supervised learning methods with the complex semantic distributions of remote sensing imagery, this paper proposes a unified-architecture multimodal remote sensing foundation model. The model employs a single Transformer backbone integrated with an adaptive image patch merging module, learnable modality-specific prompt tokens, and a Mixture-of-Experts (MoE) mechanism, coupled with a remote sensing–oriented self-supervised pretraining strategy. Extensive experiments across 16 datasets and 7 downstream tasks demonstrate that the model achieves an average performance gain of 1.8 percentage points over the prior state-of-the-art SkySense, significantly improving cross-modal representation capability and generalization. Key contributions include: (i) the first single-backbone architecture enabling joint modeling of multimodal remote sensing data; and (ii) a remote sensing–tailored self-supervised paradigm and an efficient scaling mechanism.
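The two mechanisms named above — modality-specific prompt tokens feeding a single shared backbone, and an MoE layer — can be sketched in a minimal, framework-free form. Everything below (names, dimensions, top-1 routing, ReLU experts) is an illustrative assumption, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_PROMPTS = 16, 4          # embedding dim, prompt tokens per modality (assumed sizes)

# Hypothetical illustration: each modality owns a few learnable prompt tokens,
# while patch tokens from every modality go through the SAME backbone weights.
prompts = {
    "optical": rng.normal(size=(N_PROMPTS, D)),
    "sar":     rng.normal(size=(N_PROMPTS, D)),
}

def prepend_modality_prompts(patch_tokens: np.ndarray, modality: str) -> np.ndarray:
    """Concatenate the modality's prompt tokens before its patch tokens."""
    return np.concatenate([prompts[modality], patch_tokens], axis=0)

def moe_ffn(tokens: np.ndarray, gate_w: np.ndarray, experts: list) -> np.ndarray:
    """Top-1 mixture-of-experts feed-forward: route each token to one expert."""
    logits = tokens @ gate_w                      # (T, n_experts) gating scores
    choice = logits.argmax(axis=-1)               # winning expert per token
    out = np.empty_like(tokens)
    for e, (w1, w2) in enumerate(experts):
        mask = choice == e
        h = np.maximum(tokens[mask] @ w1, 0.0)    # ReLU hidden layer
        out[mask] = h @ w2
    return out

n_experts, hidden = 3, 32
gate_w = rng.normal(size=(D, n_experts))
experts = [(rng.normal(size=(D, hidden)), rng.normal(size=(hidden, D)))
           for _ in range(n_experts)]

patches = rng.normal(size=(10, D))                # 10 patch tokens from one image
seq = prepend_modality_prompts(patches, "sar")    # (14, D): 4 prompts + 10 patches
y = moe_ffn(seq, gate_w, experts)                 # same shape, expert-routed
```

The point of the sketch is the parameter layout: only the small prompt tables differ per modality, so adding a modality does not add a backbone, and the MoE layer grows capacity without activating every expert per token.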
📝 Abstract
The multi-modal remote sensing foundation model (MM-RSFM) has significantly advanced various Earth observation tasks, such as urban planning, environmental monitoring, and natural disaster management. However, most existing approaches require a separate backbone network for each data modality, leading to redundancy and inefficient parameter utilization. Moreover, prevalent pre-training methods typically apply self-supervised learning (SSL) techniques designed for natural images without adequately accommodating the characteristics of remote sensing (RS) images, such as the complicated semantic distribution within a single RS image. In this work, we present SkySense V2, a unified MM-RSFM that employs a single transformer backbone to handle multiple modalities. This backbone is pre-trained with a novel SSL strategy tailored to the distinct traits of RS data. In particular, SkySense V2 incorporates an innovative adaptive patch merging module and learnable modality prompt tokens to address challenges related to varying resolutions and limited feature diversity across modalities. In addition, we incorporate a mixture-of-experts (MoE) module to further enhance the performance of the foundation model. SkySense V2 demonstrates impressive generalization abilities through an extensive evaluation involving 16 datasets over 7 tasks, outperforming SkySense by an average of 1.8 points.
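To make the patch-merging idea concrete, here is a simplified sketch that merges each 2×2 block of patch embeddings into one token via a linear map. This is a fixed (Swin-style) merge; the paper's module is *adaptive* to the input resolution, which this sketch does not model, and all names and sizes here are assumptions:

```python
import numpy as np

def merge_patches_2x2(grid: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Merge each 2x2 block of patch embeddings into one token.

    grid: (H, W, D) patch-embedding grid with even H and W.
    w:    (4*D, D_out) linear projection applied to each concatenated block.
    Returns a (H//2, W//2, D_out) grid, halving the spatial token count
    along each axis.
    """
    H, W, D = grid.shape
    blocks = grid.reshape(H // 2, 2, W // 2, 2, D)           # split into 2x2 blocks
    blocks = blocks.transpose(0, 2, 1, 3, 4)                 # (H//2, W//2, 2, 2, D)
    blocks = blocks.reshape(H // 2, W // 2, 4 * D)           # flatten each block
    return blocks @ w

rng = np.random.default_rng(1)
D, D_out = 8, 16
grid = rng.normal(size=(4, 6, D))        # 4x6 grid of 8-dim patch embeddings
w = rng.normal(size=(4 * D, D_out))
merged = merge_patches_2x2(grid, w)      # (2, 3, 16): half the spatial extent
```

A resolution-adaptive version would presumably vary the merge factor (or learn it) per input, so that imagery at different ground sampling distances yields comparably sized token grids for the shared backbone.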