🤖 AI Summary
Synthetic Aperture Radar (SAR) imagery poses significant challenges for self-supervised pretraining and semantic segmentation due to scarce annotations, strong speckle noise, and extreme intensity values. To address these issues, this work constructs the first nationwide single-channel (HH-polarized) ALOS-2 SAR dataset covering Japan and introduces SAR-specific weighted self-supervised pretraining methods, SAR-W-MixMAE and SAR-W-SimMIM, paired with Vision Transformers. These approaches employ weighted reconstruction losses to mitigate the adverse effects of speckle noise and extreme intensity values during pretraining; the pretrained encoders are then fine-tuned for semantic segmentation. Experimental results demonstrate that the proposed methods substantially outperform random-initialization baselines, confirming the effectiveness and practicality of SAR-tailored self-supervised pretraining for downstream semantic segmentation tasks.
📝 Abstract
Masked auto-encoders (MAE) and related approaches have shown promise for satellite imagery, but their application to synthetic aperture radar (SAR) remains limited due to challenges in semantic labeling and high noise levels. Building on our prior work with SAR-W-MixMAE, which adds a SAR-specific intensity-weighted loss to standard MixMAE for pretraining, we also introduce SAR-W-SimMIM, a weighted variant of SimMIM applied to ALOS-2 single-channel SAR imagery. This method aims to reduce the impact of speckle and extreme intensity values during self-supervised pretraining. We evaluate its effect on semantic segmentation against our previous SAR-W-MixMAE results and a random-initialization baseline, observing notable improvements. In addition, pretraining and fine-tuning models on satellite imagery pose unique challenges, particularly when developing region-specific models. Imbalanced land cover distributions, such as dominant water, forest, or desert areas, can introduce bias that affects both pretraining and downstream tasks like land cover segmentation. To address this, we constructed a SAR dataset using ALOS-2 single-channel (HH polarization) imagery focused on the Japan region, marking the initial phase toward a national-scale foundation model. This dataset was used to pretrain a vision transformer-based autoencoder, and the resulting encoder was fine-tuned for semantic segmentation with a task-specific decoder. Initial results demonstrate significant performance improvements compared to training from scratch with random initialization. In summary, this work provides a guide to processing and preparing ALOS-2 observations into a dataset suitable for self-supervised pretraining of models and fine-tuning on downstream tasks such as semantic segmentation.
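To make the weighted-loss idea concrete, the sketch below shows one plausible form of an intensity-weighted masked reconstruction loss for single-channel SAR patches. The specific weighting scheme (down-weighting high-intensity pixels by an inverse-intensity factor) and the function name are illustrative assumptions, not the paper's exact formulation; the loss is only computed over masked pixels, as in SimMIM-style pretraining.

```python
import numpy as np

def weighted_masked_recon_loss(pred, target, mask, eps=1.0):
    """Illustrative intensity-weighted masked reconstruction loss.

    pred, target: (H, W) single-channel SAR intensity arrays.
    mask: (H, W) binary array, 1 = masked pixel to be reconstructed.
    eps: stabilizer for the inverse-intensity weight (assumed form).

    NOTE: the inverse-intensity weighting below is an assumption for
    illustration; the actual SAR-W-SimMIM weighting may differ.
    """
    # Down-weight pixels with extreme intensity values (e.g. strong
    # scatterers or speckle spikes) so they dominate the loss less.
    weight = 1.0 / (np.abs(target) + eps)
    per_pixel = (pred - target) ** 2 * weight * mask
    # Average only over the masked region.
    return per_pixel.sum() / max(mask.sum(), 1)
```

In use, `pred` would come from the decoder of the masked autoencoder and `mask` from the random patch-masking step; unmasked pixels contribute nothing to the loss.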