Scaling Efficient Masked Image Modeling on Large Remote Sensing Dataset

📅 2024-06-17
📈 Citations: 3
✨ Influential: 0
🤖 AI Summary
Current remote sensing datasets suffer from limited scale and insufficient diversity, while conventional masked image modeling (MIM) mandates reconstruction of all image patches, leading to weak representation generalization and substantial computational redundancy. To address these limitations, we propose a novel pretraining paradigm for large-scale optical remote sensing foundation models. Specifically, we construct OpticalRS-13M, the first open-source, million-scale optical remote sensing dataset. We further introduce SelectiveMAE, a semantic-aware MIM framework that jointly incorporates semantic-guided masking and dynamic token importance scoring to reconstruct only semantically rich regions while adaptively skipping background pixels. Additionally, we optimize the ViT architecture and distributed training pipeline. Our approach achieves state-of-the-art performance across classification, detection, and segmentation benchmarks, accelerates training by over 2×, and significantly enhances model generalization and scalability.

๐Ÿ“ Abstract
Masked Image Modeling (MIM) has become an essential method for building foundational visual models in remote sensing (RS). However, the limitations in size and diversity of existing RS datasets restrict the ability of MIM methods to learn generalizable representations. Additionally, conventional MIM techniques, which require reconstructing all tokens, introduce unnecessary computational overhead. To address these issues, we present a new pre-training pipeline for RS models, featuring the creation of a large-scale RS dataset and an efficient MIM approach. We curated a high-quality dataset named OpticalRS-13M by collecting publicly available RS datasets and processing them through exclusion, slicing, and deduplication. OpticalRS-13M comprises 13 million optical images covering various RS tasks, such as object detection and pixel segmentation. To enhance efficiency, we propose SelectiveMAE, a pre-training method that dynamically encodes and reconstructs semantically rich patch tokens, thereby reducing the inefficiencies of traditional MIM models caused by redundant background pixels in RS images. Extensive experiments demonstrate that OpticalRS-13M significantly improves classification, detection, and segmentation performance, while SelectiveMAE improves training efficiency by more than 2 times. This highlights the effectiveness and scalability of our pipeline in developing RS foundational models.
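The abstract names three curation steps for OpticalRS-13M: exclusion, slicing, and deduplication. A minimal sketch of the slicing and deduplication steps is shown below; the tile size, SHA-1 content hashing, and exact-match duplicate criterion are illustrative assumptions, not the paper's actual pipeline.

```python
import hashlib

import numpy as np


def slice_and_dedup(images, tile=256):
    """Slice large scenes into fixed-size tiles and drop exact
    duplicates via content hashing. A simplified stand-in for the
    exclusion/slicing/deduplication pipeline described in the
    abstract, whose exact criteria are not given here."""
    seen, tiles = set(), []
    for img in images:  # img: (H, W) ndarray
        h, w = img.shape[:2]
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                t = img[y:y + tile, x:x + tile]
                key = hashlib.sha1(t.tobytes()).hexdigest()
                if key not in seen:  # keep only first occurrence
                    seen.add(key)
                    tiles.append(t)
    return tiles
```

In practice a near-duplicate detector (e.g. perceptual hashing) would be more appropriate than exact byte matching, since re-acquired satellite scenes rarely match bit-for-bit.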
Problem

Research questions and friction points this paper is trying to address.

Limited size and diversity of RS datasets restrict MIM generalization
Conventional MIM techniques incur unnecessary computational overhead
Redundant background pixels in RS images reduce MIM efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Creation of the large-scale RS dataset OpticalRS-13M
Introduction of the efficient MIM method SelectiveMAE
Dynamic encoding and reconstruction of semantically rich patch tokens
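The selective-encoding idea behind SelectiveMAE can be sketched as follows: score each patch token for importance, then encode and reconstruct only the top-ranked ones. The variance-based score below is a hypothetical proxy for the paper's semantic-guided importance scoring, which is not specified in this summary.

```python
import numpy as np


def select_semantic_tokens(patches, keep_ratio=0.4):
    """Rank patch tokens by a simple variance-based importance proxy
    (a stand-in for SelectiveMAE's learned scoring) and return the
    indices of the top fraction to encode/reconstruct."""
    # patches: (N, D) array of flattened patch pixels
    scores = patches.var(axis=1)  # low variance ~ uniform background
    k = max(1, int(len(patches) * keep_ratio))
    keep_idx = np.argsort(scores)[::-1][:k]  # highest-variance patches
    return np.sort(keep_idx)


# toy example: 4 patches, two uniform "background", two textured
patches = np.array([
    [0.0, 0.0, 0.0, 0.0],  # flat background
    [1.0, 0.0, 1.0, 0.0],  # textured
    [0.5, 0.5, 0.5, 0.5],  # flat background
    [0.9, 0.1, 0.2, 0.8],  # textured
])
kept = select_semantic_tokens(patches, keep_ratio=0.5)
```

Skipping the low-scoring tokens shrinks both the encoder input and the reconstruction target, which is where the claimed 2× training speedup would come from.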
Fengxiang Wang
National University of Defense Technology, China
Computer Vision, Remote Sensing
Hongzhen Wang
Tsinghua University, China
Di Wang
Wuhan University, China
Zonghao Guo
University of Chinese Academy of Sciences
Zhenyu Zhong
Ant Group
Security
Long Lan
National University of Defense Technology, China
Jing Zhang
The University of Sydney, Australia
Zhiyuan Liu
Tsinghua University, China
Maosong Sun
Professor of Computer Science and Technology, Tsinghua University
Natural Language Processing, Artificial Intelligence, Social Computing