AI Summary
To address the scarcity of labeled data in remote sensing, which limits fully supervised approaches, this paper proposes WaveMAE, a wavelet-decomposition-based masked autoencoder for self-supervised pretraining. Methodologically, WaveMAE applies a multi-level discrete wavelet transform (DWT) to disentangle frequency components across scales, explicitly learning scale-aware high-frequency representations, and introduces a spherical-harmonics-driven Geo-conditioned Positional Encoding (GPE) to jointly model semantic features and geospatial structural priors. Evaluated on the PANGAEA benchmark, WaveMAE significantly outperforms existing self-supervised learning (SSL) methods, achieving state-of-the-art performance on downstream segmentation and regression tasks. A lightweight variant containing only 26.4% of the full model's parameters matches the original's accuracy, demonstrating the effectiveness and generalizability of integrating frequency-domain modeling with geographic inductive biases.
Abstract
Self-supervised learning (SSL) has recently emerged as a key strategy for building foundation models in remote sensing, where the scarcity of annotated data limits the applicability of fully supervised approaches. In this work, we introduce WaveMAE, a masked autoencoding framework tailored for multispectral satellite imagery. Unlike conventional pixel-based reconstruction, WaveMAE leverages a multi-level Discrete Wavelet Transform (DWT) to disentangle frequency components and guide the encoder toward learning scale-aware high-frequency representations. We further propose a Geo-conditioned Positional Encoding (GPE), which incorporates geographical priors via Spherical Harmonics, encouraging embeddings that respect both semantic and geospatial structure. To ensure fairness in evaluation, all methods are pretrained on the same dataset (fMoW-S2) and systematically evaluated on the diverse downstream tasks of the PANGAEA benchmark, spanning semantic segmentation, regression, change detection, and multilabel classification. Extensive experiments demonstrate that WaveMAE achieves consistent improvements over prior state-of-the-art approaches, with substantial gains on segmentation and regression benchmarks. The effectiveness of WaveMAE pretraining is further demonstrated by showing that even a lightweight variant, containing only 26.4% of the parameters, achieves state-of-the-art performance. Our results establish WaveMAE as a strong and geographically informed foundation model for multispectral remote sensing imagery.
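The abstract describes guiding reconstruction with a multi-level DWT that separates a low-frequency approximation from high-frequency detail subbands at several scales. The paper's exact transform and wavelet family are not given here; as a minimal sketch, a multi-level 2D Haar DWT (the simplest orthonormal wavelet) illustrates the kind of frequency pyramid involved:

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2D Haar DWT: split an image into a low-frequency
    approximation (LL) and three high-frequency detail subbands (LH, HL, HH)."""
    # Pairwise sums/differences (orthonormal Haar) along rows, then columns.
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)  # row low-pass
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)  # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)

def multi_level_dwt(x, levels):
    """Recursively decompose the LL band: per-level detail subbands
    plus one coarsest approximation, i.e. a multi-scale frequency pyramid."""
    details = []
    for _ in range(levels):
        x, bands = haar_dwt2(x)
        details.append(bands)
    return x, details

img = np.random.rand(64, 64)
ll, details = multi_level_dwt(img, levels=3)
print(ll.shape)             # coarsest approximation: (8, 8)
print(details[0][0].shape)  # finest-scale detail band: (32, 32)
```

Because the Haar transform is orthonormal, the decomposition preserves signal energy exactly, so reconstruction targets defined on the subbands lose no information relative to pixel-space targets.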
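The GPE incorporates geographic priors via Spherical Harmonics, but the abstract does not specify the exact formulation. As a hedged illustration (degree cap, normalization, and function names are assumptions, not the paper's design), one could map a tile's latitude/longitude to the unit sphere and evaluate real spherical harmonics up to degree 1, yielding smooth, pole-consistent position features:

```python
import numpy as np

def latlon_to_unit_vector(lat_deg, lon_deg):
    """Map geographic coordinates to a point (x, y, z) on the unit sphere."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return (np.cos(lat) * np.cos(lon),
            np.cos(lat) * np.sin(lon),
            np.sin(lat))

def sh_geo_encoding(lat_deg, lon_deg):
    """Real spherical-harmonic features up to degree 1 (4 values).
    Higher degrees would add finer angular detail; this cap is illustrative."""
    x, y, z = latlon_to_unit_vector(lat_deg, lon_deg)
    c0 = 0.5 * np.sqrt(1.0 / np.pi)    # Y_0^0 (constant term)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))  # degree-1 normalization
    # Real SH of degree 1 are proportional to the Cartesian components.
    return np.array([c0, c1 * y, c1 * z, c1 * x])

print(sh_geo_encoding(45.0, 9.0))  # e.g. features for a tile near Milan
```

Unlike raw (lat, lon) inputs, such features are continuous across the antimeridian and well behaved at the poles, which is the kind of geospatial structure a positional prior should respect.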