AI Summary
Current remote sensing datasets suffer from limited scale and insufficient diversity, while conventional masked image modeling (MIM) mandates reconstruction of all image patches, leading to weak representation generalization and substantial computational redundancy. To address these limitations, we propose a novel pretraining paradigm for large-scale optical remote sensing foundation models. Specifically, we construct OpticalRS-13M, the first open-source, million-scale optical remote sensing dataset. We further introduce SelectiveMAE, a semantic-aware MIM framework that jointly incorporates semantic-guided masking and dynamic token importance scoring to reconstruct only semantically rich regions while adaptively skipping background pixels. Additionally, we optimize the ViT architecture and distributed training pipeline. Our approach achieves state-of-the-art performance across classification, detection, and segmentation benchmarks, accelerates training by over 2x, and significantly enhances model generalization and scalability.
Abstract
Masked Image Modeling (MIM) has become an essential method for building visual foundation models in remote sensing (RS). However, the limited size and diversity of existing RS datasets restrict the ability of MIM methods to learn generalizable representations. Additionally, conventional MIM techniques, which require reconstructing all tokens, introduce unnecessary computational overhead. To address these issues, we present a new pre-training pipeline for RS models, featuring the creation of a large-scale RS dataset and an efficient MIM approach. We curated a high-quality dataset named OpticalRS-13M by collecting publicly available RS datasets and processing them through exclusion, slicing, and deduplication. OpticalRS-13M comprises 13 million optical images covering various RS tasks, such as object detection and pixel segmentation. To enhance efficiency, we propose SelectiveMAE, a pre-training method that dynamically encodes and reconstructs semantically rich patch tokens, thereby reducing the inefficiencies that redundant background pixels in RS images cause for traditional MIM models. Extensive experiments demonstrate that OpticalRS-13M significantly improves classification, detection, and segmentation performance, while SelectiveMAE accelerates training by more than 2x. This highlights the effectiveness and scalability of our pipeline in developing RS foundation models.
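The core idea behind SelectiveMAE, as described above, is to score patch tokens by semantic richness and encode/reconstruct only the highest-scoring ones, skipping redundant background. The sketch below illustrates this selection step in a minimal, self-contained way; it is a hypothetical approximation, not the authors' implementation. In particular, using per-patch pixel variance as the importance score is an assumption made here for illustration (the paper's actual scoring is learned and semantic-guided), and the function name and parameters are invented for this example.

```python
import numpy as np

def select_rich_patches(image, patch=16, keep_ratio=0.25):
    """Hypothetical sketch of SelectiveMAE-style token selection.

    Scores each non-overlapping patch by its pixel variance (a cheap
    stand-in for semantic richness) and keeps only the top fraction,
    so the encoder can skip low-information background patches.
    Assumes a single-channel image whose sides divide evenly by `patch`.
    """
    h, w = image.shape
    # Split into (num_patches, patch*patch) flattened patch tokens.
    grid = image.reshape(h // patch, patch, w // patch, patch).swapaxes(1, 2)
    tokens = grid.reshape(-1, patch * patch)
    # Importance proxy: textured/object patches have high variance,
    # flat background patches have variance near zero.
    scores = tokens.var(axis=1)
    k = max(1, int(keep_ratio * len(scores)))
    keep_idx = np.argsort(scores)[::-1][:k]  # indices of top-k patches
    return keep_idx, tokens[keep_idx]

# Toy usage: a 64x64 image with a textured "object" on a flat background.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = rng.normal(size=(32, 32))  # high-variance central region
idx, kept = select_rich_patches(img, patch=16, keep_ratio=0.25)
# Only the 4 patches covering the textured region survive; the 12
# background patches are never passed to the encoder.
```

In a real MAE-style pipeline, only the kept tokens would be fed through the ViT encoder, which is where the reported 2x training speedup comes from: the expensive attention layers see a fraction of the tokens per image.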