Uncertainty Awareness and Trust in Explainable AI: On Trust Calibration Using Local and Global Explanations

📅 2025-09-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the problem of miscalibrated user trust in explainable AI (XAI) arising from insufficient uncertainty awareness. We propose the first unified framework integrating uncertainty modeling, robustness analysis, and global interpretability. Methodologically, we combine local explanations (e.g., feature attribution) with multi-concept global explanations—including concept activation maps and uncertainty visualizations—and systematically evaluate their impact on user trust calibration, comprehension depth, and usage satisfaction in vision tasks. Our key contributions are: (1) the first synergistic modeling of uncertainty quantification and global interpretability; and (2) empirical validation that complex visual explanations incorporating uncertainty significantly outperform conventional local explanations—yielding +32% improvement in user trust, +28% in explanation credibility, and +25% in task satisfaction. Results underscore the critical role of global explanations in fostering human-AI trustworthy collaboration.
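The summary does not include code. To make the uncertainty-quantification side of such a pipeline concrete, here is a minimal Monte-Carlo-dropout-style sketch (a common technique for the kind of uncertainty scores that get combined with feature attributions); the toy linear model and all names are hypothetical, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, weights, n_samples=100, p_drop=0.5):
    """MC-dropout sketch: average softmax outputs over random dropout masks
    and report predictive entropy as an uncertainty score.
    Hypothetical toy model: one linear layer followed by softmax."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape[0]) > p_drop      # randomly drop input features
        logits = (x * mask) @ weights / (1.0 - p_drop)    # rescale to keep expectation
        e = np.exp(logits - logits.max())                 # numerically stable softmax
        probs.append(e / e.sum())
    mean_p = np.mean(probs, axis=0)                       # averaged class probabilities
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))    # predictive uncertainty
    return mean_p, entropy

x = np.array([1.0, -0.5, 2.0])
W = rng.normal(size=(3, 4))
p, u = mc_dropout_predict(x, W)
```

High predictive entropy flags inputs where an explanation should be presented with an explicit uncertainty cue rather than as a confident attribution.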

📝 Abstract
Explainable AI has become a common term in the literature, scrutinized by computer scientists and statisticians and highlighted by psychological and philosophical researchers. A major effort many researchers tackle is constructing general guidelines for XAI schemes, and we derive such guidelines from our study. While some areas of XAI are well studied, we focus on uncertainty explanations and on global explanations, which are often left out. We chose an algorithm that covers several concepts simultaneously, such as uncertainty, robustness, and global XAI, and tested its ability to calibrate trust. We then examined whether an algorithm that aims for an intuitive visual understanding, despite being complicated internally, can yield higher user satisfaction and human interpretability.
Problem

Research questions and friction points this paper is trying to address.

Investigating trust calibration through local and global explanations
Evaluating uncertainty explanations and algorithm robustness in XAI
Assessing user satisfaction and human interpretability of visual XAI
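The friction points above center on trust calibration. The paper does not specify its metric here; one simple illustrative way to quantify calibrated trust (an assumption for this sketch, not the paper's definition) is the fraction of cases where a user's reliance on the AI matches the model's actual correctness:

```python
# Hypothetical trust-calibration score (not the paper's metric): the fraction
# of cases where the user's reliance decision agrees with model correctness.
def trust_calibration_score(user_relied, model_correct):
    """user_relied[i]: did the user accept the AI's answer on case i?
    model_correct[i]: was the AI actually right on case i?
    Perfectly calibrated trust relies exactly when the model is right."""
    assert len(user_relied) == len(model_correct)
    agree = sum(r == c for r, c in zip(user_relied, model_correct))
    return agree / len(user_relied)

score = trust_calibration_score(
    user_relied=[True, True, False, True],
    model_correct=[True, False, False, True],
)
# score = 0.75: the user over-relied on one of the four cases
```

A score of 1.0 means the user relied on the model exactly when it was correct; over-trust and under-trust both pull the score down.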
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty explanations and global explanations
An algorithm covering uncertainty, robustness, and global XAI
Intuitive visual understanding for trust calibration