🤖 AI Summary
To address the scarcity of labeled data for hyperspectral image (HSI) analysis, which severely limits the performance of Transformer-based models, this paper proposes Spatial-Frequency Masked Image Modeling (SFMIM), a self-supervised pretraining framework. SFMIM couples spatial locality with spectral frequency-domain structure during HSI pretraining: it randomly masks non-overlapping spatial patches (each comprising the full spectrum of its location) while simultaneously masking spectral Fourier coefficients obtained via the FFT, with the corrupted spectrum restored through the inverse FFT, enabling joint spatial-spectral modeling. Built on a Transformer encoder, SFMIM optimizes the dual-domain reconstruction objectives end to end. Evaluated on three public HSI classification benchmarks, it achieves state-of-the-art (SOTA) performance. Moreover, fine-tuning converges significantly faster and attains high accuracy with only a small number of labeled samples, demonstrating strong transferability and data efficiency.
📝 Abstract
Hyperspectral images (HSIs) capture rich spectral signatures that reveal vital material properties, offering broad applicability across various domains. However, the scarcity of labeled HSI data limits the full potential of deep learning, especially for transformer-based architectures that require large-scale training. To address this constraint, we propose Spatial-Frequency Masked Image Modeling (SFMIM), a self-supervised pretraining strategy for hyperspectral data that leverages the abundance of unlabeled data. Our method introduces a novel dual-domain masking mechanism that operates in both the spatial and frequency domains. The input HSI cube is first divided into non-overlapping patches along the spatial dimensions, with each patch comprising the entire spectrum of its corresponding spatial location. In spatial masking, we randomly mask selected patches and train the model to reconstruct them from the visible patches. Concurrently, in frequency masking, we remove portions of the frequency components of the input spectra and predict the missing frequencies. By learning to reconstruct these masked components, the transformer-based encoder captures higher-order spectral-spatial correlations. We evaluate our approach on three publicly available HSI classification benchmarks and demonstrate that it achieves state-of-the-art performance. Notably, our model shows rapid convergence during fine-tuning, highlighting the efficiency of our pretraining strategy.
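The dual-domain masking described above can be sketched in NumPy. This is a minimal illustration under assumptions not stated in the abstract (patch size, masking ratios, zero-filling of masked regions, and a real-valued FFT along the spectral axis are all hypothetical choices here), not the authors' actual implementation:

```python
import numpy as np

def spatial_mask(cube, patch=4, ratio=0.5, rng=None):
    """Zero out random non-overlapping spatial patches of an HSI cube.

    cube: array of shape (H, W, C); each patch keeps the full spectrum
    of its spatial location, matching the patching scheme in the paper.
    """
    rng = rng or np.random.default_rng(0)
    H, W, C = cube.shape
    ph, pw = H // patch, W // patch          # patch grid dimensions
    mask = np.zeros(ph * pw, dtype=bool)
    mask[rng.permutation(ph * pw)[: int(ratio * ph * pw)]] = True
    mask = mask.reshape(ph, pw)
    out = cube.copy()
    for i in range(ph):
        for j in range(pw):
            if mask[i, j]:                   # hide the whole patch spectrum
                out[i*patch:(i+1)*patch, j*patch:(j+1)*patch, :] = 0.0
    return out, mask

def frequency_mask(cube, ratio=0.3, rng=None):
    """Drop random spectral Fourier coefficients, then invert the FFT."""
    rng = rng or np.random.default_rng(1)
    spec = np.fft.rfft(cube, axis=-1)        # per-pixel spectral FFT
    F = spec.shape[-1]
    spec[..., rng.permutation(F)[: int(ratio * F)]] = 0.0
    return np.fft.irfft(spec, n=cube.shape[-1], axis=-1)
```

A pretraining step would feed both corrupted views to the Transformer encoder and regress the original cube; the reconstruction losses from the two domains are then combined, per the paper's end-to-end dual-domain objective.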