Towards Integrating Uncertainty for Domain-Agnostic Segmentation

๐Ÿ“… 2025-12-29
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the poor robustness and weak zero-shot transfer capability of segmentation foundation models (e.g., SAM) in cross-domain scenarios, this paper proposes an uncertainty-aware domain-agnostic segmentation framework. Methodologically, it introduces (1) UncertSAMโ€”the first benchmark for segmentation robustness under multiple challenging domain shifts, covering eight difficult categories including shadow, transparency, and camouflage; (2) the first empirical validation that lightweight, posterior-only Laplace approximation over the final layer efficiently estimates predictive uncertainty; and (3) theoretical and empirical evidence that uncertainty strongly correlates with segmentation error and effectively guides prediction refinement and zero-shot domain adaptation. Experiments demonstrate substantial improvements in out-of-distribution generalization and model interpretability. The code and benchmark are publicly released.

Technology Category

Application Category

๐Ÿ“ Abstract
Foundation models for segmentation such as the Segment Anything Model (SAM) family exhibit strong zero-shot performance, but remain vulnerable in shifted or limited-knowledge domains. This work investigates whether uncertainty quantification can mitigate such challenges and enhance model generalisability in a domain-agnostic manner. To this end, we (1) curate UncertSAM, a benchmark comprising eight datasets designed to stress-test SAM under challenging segmentation conditions including shadows, transparency, and camouflage; (2) evaluate a suite of lightweight, post-hoc uncertainty estimation methods; and (3) assess a preliminary uncertainty-guided prediction refinement step. Among evaluated approaches, a last-layer Laplace approximation yields uncertainty estimates that correlate well with segmentation errors, indicating a meaningful signal. While refinement benefits are preliminary, our findings underscore the potential of incorporating uncertainty into segmentation models to support robust, domain-agnostic performance. Our benchmark and code are made publicly available.
Problem

Research questions and friction points this paper is trying to address.

Enhancing segmentation model robustness in challenging, domain-agnostic conditions.
Evaluating uncertainty quantification methods to improve model generalization.
Addressing performance gaps in shifted or limited-knowledge segmentation domains.
Innovation

Methods, ideas, or system contributions that make the work stand out.

UncertSAM benchmark tests segmentation under challenging conditions
Lightweight post-hoc uncertainty estimation methods are evaluated
Uncertainty-guided prediction refinement enhances model generalisability
๐Ÿ”Ž Similar Papers
No similar papers found.
J
Jesse Brouwers
UvA-Bosch Delta Lab, University of Amsterdam
X
Xiaoyan Xing
UvA-Bosch Delta Lab, University of Amsterdam
Alexander Timans
Alexander Timans
University of Amsterdam
machine learningprobabilistic inferenceuncertainty quantificationconformal prediction