ConformalSAM: Unlocking the Potential of Foundational Segmentation Models in Semi-Supervised Semantic Segmentation with Conformal Prediction

📅 2025-07-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address label scarcity in semi-supervised semantic segmentation (SSSS), this paper proposes ConformalSAM—the first framework to integrate conformal prediction into the calibration of foundational segmentation models for domain adaptation, thereby enhancing the reliability of pseudo-labels for unlabeled images. Methodologically, it leverages SEEM to generate initial pixel-level masks, applies conformal prediction for uncertainty quantification and high-confidence pixel selection, and introduces a self-reliance training strategy to mitigate overfitting to pseudo-labels. Evaluated on three standard SSSS benchmarks, ConformalSAM consistently outperforms state-of-the-art methods. Moreover, as a plug-and-play module, it robustly improves the performance of diverse mainstream approaches. This work empirically validates the effectiveness and generalizability of the "foundation model + statistical calibration" paradigm for weakly supervised vision tasks.

📝 Abstract
Pixel-level vision tasks, such as semantic segmentation, require extensive and high-quality annotated data, which is costly to obtain. Semi-supervised semantic segmentation (SSSS) has emerged as a solution to alleviate the labeling burden by leveraging both labeled and unlabeled data through self-training techniques. Meanwhile, the advent of foundational segmentation models pre-trained on massive data has shown the potential to generalize across domains effectively. This work explores whether a foundational segmentation model can address label scarcity in pixel-level vision tasks as an annotator for unlabeled images. Specifically, we investigate the efficacy of using SEEM, a Segment Anything Model (SAM) variant fine-tuned for textual input, to generate predictive masks for unlabeled data. To address the shortcomings of using SEEM-generated masks as supervision, we propose ConformalSAM, a novel SSSS framework which first calibrates the foundation model using the target domain's labeled data and then filters out unreliable pixel labels of unlabeled data so that only high-confidence labels are used as supervision. By leveraging conformal prediction (CP) to adapt foundation models to target data through uncertainty calibration, ConformalSAM reliably exploits the strong capability of the foundational segmentation model, which benefits early-stage learning, while a subsequent self-reliance training strategy mitigates overfitting to SEEM-generated masks in the later training stage. Our experiments demonstrate that, on three standard SSSS benchmarks, ConformalSAM achieves superior performance compared to recent SSSS methods and helps boost the performance of those methods as a plug-in.
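The calibrate-then-filter mechanism described above can be illustrated with standard split conformal prediction. The sketch below is a minimal, hypothetical rendering (not the paper's actual implementation): labeled pixels from the target domain supply nonconformity scores that fix a quantile threshold, and unlabeled pixels whose top-1 foundation-model prediction falls within that threshold are kept as pseudo-labels. Function names and the specific nonconformity score (1 minus the true-class softmax probability) are illustrative assumptions.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration on labeled target-domain pixels.

    cal_probs:  (N, C) softmax scores from the foundation model
    cal_labels: (N,)   ground-truth class indices
    alpha:      target miscoverage rate (e.g. 0.1 for 90% coverage)
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the scores.
    level = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)
    return np.quantile(scores, level)

def filter_pseudo_labels(probs, q):
    """Keep only pixels whose top-1 prediction is conformally reliable.

    probs: (M, C) softmax scores on unlabeled pixels
    q:     threshold from conformal_threshold
    Returns (pseudo_labels, keep_mask); pixels with keep_mask=False
    would be excluded from the supervision signal.
    """
    top1 = probs.max(axis=-1)
    keep = (1.0 - top1) <= q
    pseudo = probs.argmax(axis=-1)
    return pseudo, keep
```

In practice the per-pixel scores would come from SEEM's predicted masks, and the retained pixels supervise the student model only in the early training stage, consistent with the self-reliance strategy the abstract describes.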
Problem

Research questions and friction points this paper is trying to address.

Addresses label scarcity in semantic segmentation using foundational models
Calibrates foundational models for reliable pixel-level predictions via conformal prediction
Enhances semi-supervised segmentation by filtering unreliable pseudo-labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses SEEM to generate predictive masks for unlabeled data
Applies conformal prediction for uncertainty calibration and pseudo-label filtering
Employs a self-reliance training strategy to prevent overfitting to generated masks
👥 Authors
Danhui Chen, Dalian University of Technology
Ziquan Liu, Queen Mary University of London
Chuxi Yang, Dalian University of Technology
Dan Wang, Dalian University of Technology
Yan Yan, Washington State University
Yi Xu, Dalian University of Technology
Xiangyang Ji, Tsinghua University