Uncertainty-Supervised Interpretable and Robust Evidential Segmentation

๐Ÿ“… 2025-09-21
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
In medical image segmentation, uncertainty estimation suffers from insufficient supervision, limiting interpretability and robustness. To address this, we propose a self-supervised uncertainty modeling framework that introduces three novel anatomically grounded priors: (1) uncertainty is positively correlated with boundary gradient magnitude, (2) sensitivity to input perturbations is quantifiable, and (3) spatial uncertainty distribution adheres to anatomical plausibility. Guided by these priors, we design a gradient- and noise-aware uncertainty supervision loss and formulate an evidential deep learningโ€“based uncertainty quantification metric. Our method requires no additional annotations, preserves state-of-the-art segmentation accuracy, and significantly improves out-of-distribution generalization. Extensive evaluation across multiple medical imaging datasets demonstrates that our uncertainty heatmaps better align with clinical boundary perception, while achieving superior calibration and discrimination compared to existing approaches.

๐Ÿ“ Abstract
Uncertainty estimation has been widely studied in medical image segmentation as a way to assess prediction reliability, particularly in deep learning approaches. However, previous methods generally lack effective supervision of uncertainty estimation, leading to predictions with low interpretability and robustness. In this work, we propose a self-supervised approach to guide the learning of uncertainty. Specifically, we introduce three principles relating uncertainty to image gradients around boundaries and to noise. Based on these principles, we design two uncertainty supervision losses that enhance the alignment between model predictions and human interpretation. Accordingly, we introduce novel quantitative metrics for evaluating the interpretability and robustness of uncertainty. Experimental results demonstrate that, compared to state-of-the-art approaches, the proposed method achieves competitive segmentation performance and superior results in out-of-distribution (OOD) scenarios while significantly improving the interpretability and robustness of uncertainty estimation. Code is available at https://github.com/suiannaius/SURE.
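The abstract does not reproduce the losses, but the first principle, that uncertainty should correlate with boundary gradient magnitude, can be sketched as a simple supervision term. The following is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names, the use of predictive entropy as the uncertainty measure, and the squared-error form of the loss are all assumptions for illustration.

```python
import numpy as np

def gradient_magnitude(img):
    # Finite-difference gradient magnitude of a 2-D image.
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

def predictive_entropy(probs, eps=1e-8):
    # Per-pixel entropy of class probabilities with shape (C, H, W),
    # normalized to [0, 1] by dividing by log(C).
    c = probs.shape[0]
    ent = -np.sum(probs * np.log(probs + eps), axis=0)
    return ent / np.log(c)

def gradient_uncertainty_loss(probs, img, eps=1e-8):
    # Hypothetical supervision term for principle (1): penalize the
    # mean squared difference between normalized predictive entropy
    # and normalized image gradient magnitude, so that estimated
    # uncertainty peaks where object boundaries do.
    g = gradient_magnitude(img)
    g = g / (g.max() + eps)
    u = predictive_entropy(probs)
    return float(np.mean((u - g) ** 2))
```

A prediction whose uncertainty peaks at the image's intensity boundary incurs a low loss, while a uniformly uncertain prediction incurs a high one; in training, such a term would be added to the usual segmentation loss with a weighting coefficient.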
Problem

Research questions and friction points this paper is trying to address.

Improves uncertainty estimation in medical image segmentation
Enhances interpretability and robustness of deep learning predictions
Addresses lack of effective supervision in uncertainty learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised approach guides uncertainty learning
Two uncertainty supervision losses enhance interpretability
Novel metrics evaluate uncertainty robustness and interpretability
๐Ÿ”Ž Similar Papers
No similar papers found.