Concerning Uncertainty -- A Systematic Survey of Uncertainty-Aware XAI

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses critical challenges in modeling, integrating, and evaluating uncertainty within explainable artificial intelligence (XAI) to enhance the reliability of explanations and user trust. Through a systematic review of uncertainty-aware XAI methods, it establishes the first unified methodological framework for the field, clarifying how uncertainty is introduced and quantified throughout the explanation pipeline—via approaches such as Bayesian inference, Monte Carlo methods, and conformal prediction—and synthesizing three integration paradigms. The work identifies the fragmented nature of current evaluation practices and proposes cohesive assessment principles that jointly consider uncertainty propagation, robustness, and human decision-making. It further highlights calibration techniques, distribution-agnostic methods, and counterfactual explanations as key frontiers for achieving trustworthy interpretability, with particular emphasis on the impact of explainer variability on explanation credibility.
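The summary names Monte Carlo methods as one route to quantifying uncertainty in the explanation pipeline, and flags explainer variability as a credibility concern. A minimal sketch of that idea (a hypothetical illustration, not the paper's method): refit a simple linear model on bootstrap resamples and treat the spread of its coefficient "attributions" as a Monte Carlo estimate of explanation uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear data; the true coefficients act as ground-truth "attributions".
n, d = 300, 3
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.0])
y = X @ true_w + rng.normal(0, 0.5, n)

# Monte Carlo estimate of explainer variability: refit on bootstrap
# resamples and track how the per-feature explanations vary.
attributions = []
for _ in range(200):
    idx = rng.integers(0, n, n)  # bootstrap resample of row indices
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    attributions.append(w)
attributions = np.array(attributions)

mean_attr = attributions.mean(axis=0)  # point explanation
std_attr = attributions.std(axis=0)    # per-feature explanation uncertainty
```

A feature whose attribution interval spans zero (here, the third feature) would be communicated as an unreliable part of the explanation, which is the "explicitly communicating uncertainty" strategy the survey describes.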
📝 Abstract
This paper surveys uncertainty-aware explainable artificial intelligence (UAXAI), examining how uncertainty is incorporated into explanatory pipelines and how such methods are evaluated. Across the literature, three recurring approaches to uncertainty quantification emerge (Bayesian, Monte Carlo, and conformal methods), alongside distinct strategies for integrating uncertainty into explanations: assessing trustworthiness, constraining models or explanations, and explicitly communicating uncertainty. Evaluation practices remain fragmented and largely model-centered, with limited attention to users and inconsistent reporting of reliability properties (e.g., calibration, coverage, explanation stability). Recent work leans toward calibration and distribution-free techniques, and recognizes explainer variability as a central concern. We argue that progress in UAXAI requires unified evaluation principles linking uncertainty propagation, robustness, and human decision-making, and we highlight counterfactual and calibration approaches as promising avenues for aligning interpretability with reliability.
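Conformal prediction, one of the three quantification approaches the abstract identifies, is also the source of the "coverage" property mentioned among the reliability criteria. A minimal split-conformal sketch on toy data (an illustrative assumption, not taken from the paper) shows where that coverage guarantee comes from:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise (hypothetical example).
x = rng.uniform(0, 1, 200)
y = 2 * x + rng.normal(0, 0.1, 200)

# Any fitted point predictor works; here, a least-squares line
# fit on the first half of the data.
coef = np.polyfit(x[:100], y[:100], 1)
predict = lambda x_new: np.polyval(coef, x_new)

# Split conformal: calibrate on held-out absolute residuals.
alpha = 0.1  # target miscoverage rate
residuals = np.abs(y[100:] - predict(x[100:]))
n_cal = len(residuals)
# Finite-sample-corrected quantile of the calibration residuals.
q = np.quantile(residuals, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)

# Prediction interval for a new point; under exchangeability its
# coverage is at least 1 - alpha, with no distributional assumptions.
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

This distribution-free construction is what makes conformal methods attractive for the reliability reporting the survey calls for: the interval width is itself a communicable uncertainty signal.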
Problem

Research questions and friction points this paper is trying to address.

Uncertainty-aware XAI
Explainable AI
Uncertainty quantification
Evaluation framework
Reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

uncertainty-aware XAI
calibration
counterfactual explanations
conformal methods
explanation reliability
Helena Löfström
Jönköping International Business School, Gjuterigatan 5, 553 18 Jönköping, Sweden
Tuwe Löfström
Department of Computing, Jönköping School of Engineering, Jönköping University
Machine Learning, Ensemble Learning, Conformal Prediction, Uncertainty Quantification, eXplainable AI
Anders Hjort
Eiendomsverdi AS, Nedre Slottsgate 8, 0157 Oslo, Norway
Fatima Rabia Yapicioglu
DISI, University of Bologna, Mura Anteo Zamboni 7, 40126 Bologna, Italy; Marketing and Sales, Automobili Lamborghini S.p.A., Via Modena 12, 40019 Sant’Agata Bolognese, Italy