Explainability of AI Uncertainty: Application to Multiple Sclerosis Lesion Segmentation on MRI

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI-based uncertainty estimates in MRI cortical lesion segmentation for multiple sclerosis lack clinical interpretability. Method: The authors propose the first analytical framework that attributes model prediction uncertainty to clinically interpretable factors, such as lesion size, morphology, and cortical involvement, by combining deep ensemble-based uncertainty quantification, instance-level uncertainty decomposition, clinical factor association modeling, and out-of-distribution generalization evaluation. Contribution/Results: The framework establishes statistically significant alignment (p < 0.001) between predicted uncertainty and expert annotator confidence, connecting technical uncertainty estimates to clinical reasoning. Validated on a multicenter dataset of 206 patients and nearly 2,000 lesions, it strengthens the clinical decision-support value of uncertainty estimates, supporting reliable segmentation both in-distribution and under distributional shift.
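
The core recipe named in the summary, deep-ensemble uncertainty quantification followed by instance-level aggregation, can be illustrated with a minimal NumPy sketch. The function names, mean-pooling aggregation, and binary predictive-entropy measure below are illustrative assumptions, not the authors' exact implementation:

```python
# Minimal sketch of deep-ensemble uncertainty for binary segmentation.
# Assumes M independently trained networks have already produced
# per-voxel foreground probability maps (shapes are illustrative).
import numpy as np

def ensemble_uncertainty(prob_maps: np.ndarray) -> np.ndarray:
    """Voxel-wise predictive entropy from M ensemble members.

    prob_maps: (M, D, H, W) array of foreground probabilities,
    one map per ensemble member.
    """
    mean_p = prob_maps.mean(axis=0)  # ensemble-averaged probability
    eps = 1e-8                       # numerical stability
    # Binary predictive entropy of the averaged prediction
    return -(mean_p * np.log(mean_p + eps)
             + (1.0 - mean_p) * np.log(1.0 - mean_p + eps))

def instance_uncertainty(entropy_map: np.ndarray,
                         lesion_mask: np.ndarray) -> float:
    """Aggregate voxel entropy over one lesion instance (mean pooling)."""
    return float(entropy_map[lesion_mask].mean())

# Toy usage with random stand-ins for M = 5 ensemble probability maps
rng = np.random.default_rng(0)
probs = rng.uniform(size=(5, 16, 16, 16))
entropy = ensemble_uncertainty(probs)
mask = probs.mean(axis=0) > 0.5      # consensus foreground mask
print(instance_uncertainty(entropy, mask))
```

Mean pooling over the lesion mask is only one plausible decomposition; the paper's instance-level scores may be computed differently.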

📝 Abstract
Trustworthy artificial intelligence (AI) is essential in healthcare, particularly for high-stakes tasks like medical image segmentation. Explainable AI and uncertainty quantification significantly enhance AI reliability by addressing key attributes such as robustness, usability, and explainability. Despite extensive technical advances in uncertainty quantification for medical imaging, understanding the clinical informativeness and interpretability of uncertainty remains limited. This study introduces a novel framework to explain the potential sources of predictive uncertainty, specifically in cortical lesion segmentation in multiple sclerosis using deep ensembles. The proposed analysis shifts the focus from the uncertainty-error relationship towards relevant medical and engineering factors. Our findings reveal that instance-wise uncertainty is strongly related to lesion size, shape, and cortical involvement. Expert rater feedback confirms that similar factors impede annotator confidence. Evaluations conducted on two datasets (206 patients, almost 2000 lesions) under both in-domain and distribution-shift conditions highlight the utility of the framework in different scenarios.
Problem

Research questions and friction points this paper is trying to address.

Explaining AI uncertainty sources in MS lesion segmentation
Enhancing clinical interpretability of predictive uncertainty
Assessing how lesion size and shape influence predictive uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep ensemble framework attributing predictive uncertainty to its sources
Focus shifted from the uncertainty-error relationship to medical and engineering factors
Evaluates impact of lesion size, shape, and cortical involvement on uncertainty (sketched below)
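
The association between per-lesion uncertainty and clinical factors can be sketched as follows. The connected-component lesion extraction, surface-to-volume shape proxy, and Spearman correlation are illustrative choices, not necessarily the paper's exact statistics:

```python
# Minimal sketch of the uncertainty-factor association analysis,
# assuming per-lesion uncertainty scores are already computed
# (e.g., with the instance_uncertainty sketch above).
import numpy as np
from scipy import ndimage
from scipy.stats import spearmanr

def lesion_factors(mask: np.ndarray):
    """Per-lesion size (voxel count) and a simple shape-irregularity proxy."""
    labels, n = ndimage.label(mask)              # connected components
    sizes, shapes = [], []
    for i in range(1, n + 1):
        lesion = labels == i
        size = int(lesion.sum())
        eroded = ndimage.binary_erosion(lesion)
        surface = size - int(eroded.sum())       # boundary voxel count
        sizes.append(size)
        shapes.append(surface / size)            # higher = more irregular
    return np.array(sizes), np.array(shapes)

# Toy usage: correlate lesion size with placeholder uncertainty scores
rng = np.random.default_rng(0)
mask = rng.uniform(size=(32, 32, 32)) > 0.97     # sparse random "lesions"
sizes, shapes = lesion_factors(mask)
uncertainty = rng.uniform(size=len(sizes))       # placeholder per-lesion scores
rho, p = spearmanr(sizes, uncertainty)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```

A rank correlation is a natural first probe here because the reported uncertainty-size relationship need not be linear; the paper's actual association modeling may be richer.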