🤖 AI Summary
This study addresses the problem of miscalibrated user trust in explainable AI (XAI) arising from insufficient uncertainty awareness. We propose the first unified framework integrating uncertainty modeling, robustness analysis, and global interpretability. Methodologically, we combine local explanations (e.g., feature attribution) with multi-concept global explanations, including concept activation maps and uncertainty visualizations, and systematically evaluate their impact on user trust calibration, depth of comprehension, and task satisfaction in vision tasks. Our key contributions are: (1) the first synergistic modeling of uncertainty quantification and global interpretability; and (2) empirical validation that complex visual explanations incorporating uncertainty significantly outperform conventional local explanations, yielding a +32% improvement in user trust, +28% in explanation credibility, and +25% in task satisfaction. These results underscore the critical role of global explanations in fostering trustworthy human-AI collaboration.
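To make the pipeline concrete, here is a minimal sketch of one standard way to pair the two ingredients named above: Monte Carlo dropout for predictive uncertainty and an input-gradient saliency map as a local feature attribution. This is an illustration under assumptions, not the paper's implementation; `SmallCNN`, `mc_dropout_predict`, and `saliency_map` are hypothetical names.

```python
# Illustrative sketch only: pairing predictive uncertainty (MC dropout)
# with a local feature attribution (input-gradient saliency) in PyTorch.
# All names are hypothetical; this is not the study's actual model or code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.25),          # kept active at test time for MC dropout
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def mc_dropout_predict(model, x, n_samples: int = 20):
    """Mean class probabilities and predictive entropy via MC dropout."""
    model.train()                        # re-enable dropout during inference
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean = probs.mean(0)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(1)  # per-input uncertainty
    return mean, entropy

def saliency_map(model, x, target: int):
    """Local explanation: absolute input gradient for the target class."""
    model.eval()
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.abs().amax(dim=1)      # collapse channels -> H x W attribution

image = torch.rand(1, 3, 32, 32)
net = SmallCNN()
probs, uncertainty = mc_dropout_predict(net, image)
attribution = saliency_map(net, image, target=int(probs.argmax()))
print(f"predicted class {int(probs.argmax())}, entropy {uncertainty.item():.3f}")
```

Global components such as concept activation maps and uncertainty visualizations would sit on top of outputs like these; for instance, the predictive entropy computed above is one scalar that an uncertainty visualization could overlay on the saliency map.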
📝 Abstract
Explainable AI (XAI) has become a common term in the literature, scrutinized by computer scientists and statisticians and highlighted by researchers in psychology and philosophy. One major effort many researchers tackle is constructing general guidelines for XAI schemes; we derive such guidelines from our study. While some areas of XAI are well studied, we focus on uncertainty explanations and also consider global explanations, which are often left out. We chose an algorithm that simultaneously covers several concepts, namely uncertainty, robustness, and global XAI, and tested its ability to calibrate trust. We then examined whether an algorithm that aims to provide an intuitive visual understanding, despite being complicated to understand, can yield higher user satisfaction and human interpretability.
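For readers unfamiliar with trust calibration as an outcome measure, a hedged sketch follows. It assumes one common operationalization from the human-AI interaction literature (appropriate reliance: following the model when it is right and overriding it when it is wrong); the study's actual protocol may differ, and `Trial` and `calibration_score` are hypothetical names.

```python
# Hypothetical sketch of a trust-calibration score, not the study's protocol:
# trust counts as well calibrated when users rely on the AI exactly when it
# is correct and override it exactly when it is wrong (appropriate reliance).
from dataclasses import dataclass

@dataclass
class Trial:
    ai_correct: bool     # was the model's prediction right on this trial?
    user_followed: bool  # did the participant accept the model's suggestion?

def calibration_score(trials: list[Trial]) -> float:
    """Fraction of trials with appropriate reliance."""
    appropriate = sum(t.ai_correct == t.user_followed for t in trials)
    return appropriate / len(trials)

trials = [Trial(True, True), Trial(False, True), Trial(False, False), Trial(True, True)]
print(f"trust calibration: {calibration_score(trials):.2f}")  # 0.75
```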