🤖 AI Summary
This study systematically investigates the consistency of evaluation methods for XAI saliency maps, specifically LIME, Grad-CAM, and Guided Backpropagation. Through a large-scale user study (N=166), it conducts the first cross-paradigm, unified comparison of three distinct evaluation families: subjective user measures (trust and satisfaction), objective improvement in model understanding, and quantitative mathematical metrics. The results reveal substantial inconsistency across these families: subjective trust and satisfaction did not differ between the maps, Grad-CAM most effectively improved users' objective understanding of model behavior, and Guided Backpropagation achieved the highest scores on mathematical fidelity metrics. Notably, several widely used mathematical metrics showed significant negative correlations with actual user understanding, challenging the prevailing assumption that high mathematical scores imply superior explainability. The study identifies paradigmatic fragmentation as a core issue in XAI evaluation and provides empirical evidence and methodological guidance for developing human-centered, multi-dimensional evaluation frameworks grounded in both cognitive validity and technical rigor.
📝 Abstract
Saliency maps are a popular approach for explaining classifications of (convolutional) neural networks. However, it remains an open question how best to evaluate saliency maps, with three families of evaluation methods commonly being used: subjective user measures, objective user measures, and mathematical metrics. We examine three of the most popular saliency map approaches (viz., LIME, Grad-CAM, and Guided Backpropagation) in a between-subjects study (N=166) across these families of evaluation methods. We test 1) for subjective measures, whether the maps differ with respect to user trust and satisfaction; 2) for objective measures, whether the maps increase users' abilities and thus understanding of a model; 3) for mathematical metrics, which map achieves the best ratings across metrics; and 4) whether the mathematical metrics can be associated with objective user measures. To our knowledge, our study is the first to compare several saliency maps across all these evaluation methods, with the finding that they do not agree in their assessment (i.e., there was no difference concerning trust and satisfaction, Grad-CAM improved users' abilities best, and Guided Backpropagation had the most favorable mathematical metrics). Additionally, we show that some mathematical metrics were associated with user understanding, although this relationship was often counterintuitive. We discuss these findings in light of general debates concerning the complementary use of user studies and mathematical metrics in the evaluation of explainable AI (XAI) approaches.
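For context on the third evaluation family, one common class of mathematical fidelity metric works by deleting the pixels a saliency map ranks as most important and watching how fast the model's confidence drops. The sketch below is purely illustrative and is not necessarily one of the metrics used in the study; the names `predict`, `steps`, and `baseline` are assumptions of this example.

```python
import numpy as np

def deletion_score(predict, image, saliency, steps=10, baseline=0.0):
    """Sketch of a deletion-style fidelity metric: erase pixels in order
    of decreasing saliency and average the model's confidence along the
    way. A faithful map should make confidence drop quickly, yielding a
    low average score."""
    order = np.argsort(saliency.ravel())[::-1]   # most salient pixels first
    masked = image.copy().ravel()                # flat, writable copy
    scores = [predict(image)]                    # confidence before masking
    chunk = max(1, order.size // steps)
    for i in range(0, order.size, chunk):
        masked[order[i:i + chunk]] = baseline    # erase next chunk of pixels
        scores.append(predict(masked.reshape(image.shape)))
    return float(np.mean(scores))                # lower = more faithful map
```

A key design question for such metrics, and one reason they can diverge from human understanding, is that they probe the model's sensitivity to perturbations rather than what a person actually learns from the explanation.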