🤖 AI Summary
This work addresses the challenge of effectively learning spatial-spectral features from multispectral remote sensing imagery, where complex backgrounds, ambiguous targets, and lack of semantic guidance hinder existing Masked Autoencoders. To overcome this, we propose SIGMAE, a novel approach that incorporates domain-specific spectral indices as prior knowledge to design a Semantic Saliency-guided Dynamic Token Masking (SSDTM) strategy. SSDTM adaptively selects and prioritizes the reconstruction of information-rich regions, progressively increasing task difficulty through curriculum learning. Evaluated on five remote sensing datasets, SIGMAE substantially outperforms current geospatial foundation models, enabling high-quality image reconstruction even at mask ratios up to 90% and significantly improving complex target recognition performance under limited labeled data conditions.
📝 Abstract
Pretraining and fine-tuning have emerged as a new paradigm in remote sensing image interpretation. Among pretraining approaches, Masked Autoencoder (MAE)-based pretraining stands out for its strong capability to learn general feature representations by reconstructing masked image regions. However, applying MAE to multispectral remote sensing images remains challenging due to complex backgrounds, indistinct targets, and the lack of semantic guidance during masking, which hinders the learning of underlying structures and meaningful spatial-spectral features. To address this, we propose a simple yet effective approach, Spectral Index-Guided MAE (SIGMAE), for multispectral image pretraining. The core idea is to incorporate domain-specific spectral indices as prior knowledge to guide dynamic token masking toward informative regions. SIGMAE introduces Semantic Saliency-Guided Dynamic Token Masking (SSDTM), a curriculum-style strategy that quantifies each patch's semantic richness and internal heterogeneity to adaptively select the most informative tokens during training. By prioritizing semantically salient regions and progressively increasing sample difficulty, SSDTM enhances spectrally rich and structurally aware representation learning, mitigates overfitting, and reduces redundant computation compared with random masking. Extensive experiments on five widely used datasets covering various downstream tasks, including scene classification, semantic segmentation, object extraction, and change detection, demonstrate that SIGMAE outperforms other pretrained geospatial foundation models. Moreover, it exhibits strong spatial-spectral reconstruction capability, even at a 90% mask ratio, and improves complex target recognition under limited labeled data. The source code and model weights will be released at https://github.com/zxk688/SIGMAE.
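To make the masking idea concrete, the following is a minimal sketch, not the paper's implementation, of how a spectral index can drive saliency-guided token masking with a curriculum schedule. NDVI is used here as one representative spectral index; the mean-plus-deviation saliency score, the linear ratio schedule, and all function names are illustrative assumptions.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized Difference Vegetation Index, a standard spectral index
    computable from the red and near-infrared bands of multispectral imagery."""
    return (nir - red) / (nir + red + eps)

def patch_saliency(index_map, patch=4):
    """Score each non-overlapping patch (token) by its mean index magnitude
    (a proxy for semantic richness) plus its standard deviation
    (a proxy for internal heterogeneity). Both terms are assumptions."""
    H, W = index_map.shape
    h, w = H // patch, W // patch
    tokens = (index_map[:h * patch, :w * patch]
              .reshape(h, patch, w, patch)
              .transpose(0, 2, 1, 3)
              .reshape(h * w, -1))
    return np.abs(tokens).mean(axis=1) + tokens.std(axis=1)

def select_mask(saliency, epoch, total_epochs, base_ratio=0.5, max_ratio=0.9):
    """Curriculum-style masking: the mask ratio grows linearly from base_ratio
    to max_ratio over training, and the most salient tokens are masked first,
    so reconstruction difficulty increases as pretraining proceeds."""
    frac = epoch / max(total_epochs - 1, 1)
    ratio = base_ratio + (max_ratio - base_ratio) * frac
    k = int(round(ratio * saliency.size))
    order = np.argsort(-saliency)  # highest-saliency tokens first
    mask = np.zeros(saliency.size, dtype=bool)
    mask[order[:k]] = True         # True = token is masked for reconstruction
    return mask
```

In an MAE-style pipeline, the boolean mask would determine which tokens are dropped before the encoder and reconstructed by the decoder; random masking corresponds to replacing the saliency ranking with a random permutation.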