Selective Masking based Self-Supervised Learning for Image Semantic Segmentation

📅 2025-12-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited performance gains of conventional self-supervised pretraining for semantic segmentation under constrained model capacity and computational resources, this paper proposes Selective Masking Self-supervision (SMS). SMS replaces random masking with a dynamic, error-driven strategy: it identifies hard sample regions during training by leveraging reconstruction errors and iteratively masks and reconstructs high-loss image patches, thereby aligning pretraining more closely with downstream segmentation challenges. A stepwise iterative reconstruction scheme enables improved segmentation accuracy—particularly for low-performing classes—without increasing inference overhead. On general-purpose and weed segmentation benchmarks, SMS achieves absolute mIoU improvements of +2.9% and +2.5%, respectively. These results demonstrate its effectiveness and generalizability for low-budget self-supervised pretraining in resource-constrained scenarios.
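The core of the selection step is ranking patches by reconstruction loss and masking the hardest ones. A minimal sketch of that selection rule, using numpy and a hypothetical `select_mask` helper (the paper's exact selection criterion and mask ratio are not specified here):

```python
import numpy as np

def select_mask(per_patch_loss: np.ndarray, mask_ratio: float) -> np.ndarray:
    """Boolean mask marking the patches with the highest reconstruction
    loss (hypothetical helper; the authors' exact rule may differ)."""
    n_patches = per_patch_loss.size
    n_mask = max(1, int(round(mask_ratio * n_patches)))
    # Indices of the n_mask highest-loss patches.
    hard_idx = np.argsort(per_patch_loss)[-n_mask:]
    mask = np.zeros(n_patches, dtype=bool)
    mask[hard_idx] = True
    return mask

# Example: 8 patches, mask the 25% with the largest loss.
losses = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.05, 0.8, 0.4])
mask = select_mask(losses, mask_ratio=0.25)
# → patches 1 and 6 (losses 0.9 and 0.8) are masked
```

Because selection only reorders which patches get masked, it adds no cost at inference time, consistent with the summary's claim.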

📝 Abstract
This paper proposes a novel self-supervised learning method for semantic segmentation using selective masking image reconstruction as the pretraining task. Our proposed method replaces the random masking augmentation used in most masked image modelling pretraining methods. The proposed selective masking method selectively masks image patches with the highest reconstruction loss by breaking the image reconstruction pretraining into iterative steps to leverage the trained model's knowledge. We show on two general datasets (Pascal VOC and Cityscapes) and two weed segmentation datasets (Nassar 2020 and Sugarbeets 2016) that our proposed selective masking method outperforms the traditional random masking method and supervised ImageNet pretraining on downstream segmentation accuracy by 2.9% for general datasets and 2.5% for weed segmentation datasets. Furthermore, we found that our selective masking method significantly improves accuracy for the lowest-performing classes. Lastly, we show that using the same pretraining and downstream dataset yields the best result for low-budget self-supervised pretraining. Our proposed Selective Masking Image Reconstruction method provides an effective and practical solution to improve end-to-end semantic segmentation workflows, especially for scenarios that require limited model capacity to meet inference speed and computational resource requirements.
Problem

Research questions and friction points this paper is trying to address.

Conventional self-supervised pretraining yields limited gains under constrained model capacity and compute
Random masking in masked image modelling is not tailored to downstream segmentation
Lowest-performing classes lag behind in downstream segmentation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective masking replaces random masking in self-supervised learning
Iterative reconstruction leverages trained model knowledge for masking
Method improves segmentation accuracy for low-performing classes
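The iterative reconstruction idea above can be sketched as a loop that alternates reconstruction and re-masking. This is an assumed loop structure under a toy loss proxy, not the authors' exact procedure; `reconstruct_loss` stands in for a forward pass of the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct_loss(patches: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in for the model's per-patch reconstruction loss; a real
    implementation would run the masked image through the network.
    Toy proxy: patch variance plays the role of difficulty."""
    return patches.var(axis=1)

def selective_masking(patches: np.ndarray, mask_ratio: float = 0.25,
                      steps: int = 3) -> np.ndarray:
    """Stepwise scheme: start from a random mask, then re-mask the
    highest-loss patches at each iteration (assumed structure)."""
    n = patches.shape[0]
    n_mask = max(1, int(round(mask_ratio * n)))
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, n_mask, replace=False)] = True  # initial random mask
    for _ in range(steps):
        loss = reconstruct_loss(patches, mask)
        hard_idx = np.argsort(loss)[-n_mask:]          # hardest patches
        mask = np.zeros(n, dtype=bool)
        mask[hard_idx] = True                          # re-mask them
    return mask

# Example: 8 patches of 4 values each; higher index → higher variance.
patches = np.stack([np.arange(4.0) * i for i in range(8)])
final_mask = selective_masking(patches)
```

With the variance proxy, the loop settles on the highest-variance patches, mirroring how the method concentrates masking on hard regions as training progresses.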