🤖 AI Summary
In medical image segmentation, uncertainty estimation suffers from insufficient supervision, limiting interpretability and robustness. To address this, we propose a self-supervised uncertainty modeling framework that introduces three novel anatomically grounded priors: (1) uncertainty is positively correlated with boundary gradient magnitude, (2) sensitivity to input perturbations is quantifiable, and (3) the spatial uncertainty distribution adheres to anatomical plausibility. Guided by these priors, we design a gradient- and noise-aware uncertainty supervision loss and formulate an evidential deep learning-based uncertainty quantification metric. Our method requires no additional annotations, preserves state-of-the-art segmentation accuracy, and significantly improves out-of-distribution generalization. Extensive evaluation across multiple medical imaging datasets demonstrates that our uncertainty heatmaps better align with clinical boundary perception, while achieving superior calibration and discrimination compared to existing approaches.
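Prior (1) above can be sketched as a self-supervised loss term. The snippet below is a minimal illustration, not the paper's implementation: it assumes the gradient magnitude is computed by central differences and that the correlation is penalized via a simple Pearson term (both are illustrative choices; the actual loss formulation is defined in the paper/code).

```python
import numpy as np

def gradient_magnitude(image):
    """Per-pixel image gradient magnitude via central differences."""
    gy, gx = np.gradient(image.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_uncertainty_loss(uncertainty, image, eps=1e-8):
    """Illustrative supervision term for Prior 1: penalize a predicted
    uncertainty map whose Pearson correlation with the boundary gradient
    magnitude is low. Loss is 0 when correlation is +1, and 2 when -1."""
    g = gradient_magnitude(image).ravel()
    u = np.asarray(uncertainty, dtype=float).ravel()
    # Standardize both maps so the correlation is scale-invariant.
    g = (g - g.mean()) / (g.std() + eps)
    u = (u - u.mean()) / (u.std() + eps)
    corr = float(np.mean(g * u))
    return 1.0 - corr
```

An uncertainty map that peaks exactly on high-gradient boundary pixels drives this loss toward zero, while an anti-correlated map drives it toward two.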
📄 Abstract
Uncertainty estimation has been widely studied in medical image segmentation as a means of assessing the reliability of predictions, particularly for deep learning approaches. However, previous methods generally lack effective supervision of the uncertainty estimates, leading to predictions with low interpretability and robustness. In this work, we propose a self-supervised approach to guide the learning of uncertainty. Specifically, we introduce three principles relating uncertainty to image gradients around boundaries and to noise. Based on these principles, we design two uncertainty supervision losses that improve the alignment between model predictions and human interpretation. Accordingly, we introduce novel quantitative metrics for evaluating the interpretability and robustness of uncertainty. Experimental results demonstrate that, compared to state-of-the-art approaches, the proposed method achieves competitive segmentation performance and superior results in out-of-distribution (OOD) scenarios, while significantly improving the interpretability and robustness of uncertainty estimation. Code is available at https://github.com/suiannaius/SURE.
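The abstract's evidential uncertainty quantification can be illustrated with the standard Dirichlet-evidence formulation from evidential deep learning: per-class evidence is obtained from the network logits (a softplus activation is assumed here), Dirichlet parameters are alpha = evidence + 1, and the vacuity u = K / sum(alpha) serves as the uncertainty score. This is a generic sketch of that formulation, not the paper's exact metric.

```python
import numpy as np

def evidential_uncertainty(logits):
    """Dirichlet-based (evidential) uncertainty for K-class logits:
    evidence e = softplus(logits), alpha = e + 1, vacuity u = K / sum(alpha).
    u lies in (0, 1] and is high when total evidence is low."""
    logits = np.asarray(logits, dtype=float)
    evidence = np.logaddexp(0.0, logits)  # numerically stable softplus
    alpha = evidence + 1.0
    K = logits.shape[-1]
    S = alpha.sum(axis=-1)
    return K / S
```

With zero logits (no evidence for any class) the vacuity is large; as evidence accumulates for the classes, the vacuity shrinks toward zero, which is why it is a natural per-pixel uncertainty score for segmentation.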