Does Explanation Correctness Matter? Linking Computational XAI Evaluation to Human Understanding

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether computational XAI evaluation metrics, such as explanation correctness, genuinely improve human understanding of AI decisions. By systematically varying explanation correctness (55%–100%) in a time-series classification task and running a controlled user study with a forward-simulation paradigm, the work provides the first empirical evidence of a nonlinear relationship between correctness and human comprehension: understanding drops once correctness falls to 70%, yet degrading it further to 55% produces no additional loss, and even perfectly correct explanations do not guarantee that every user understands. Furthermore, self-reported understanding aligns with behavioral performance only when explanations are fully correct and users have grasped the underlying decision rule. These findings challenge the prevailing assumption that higher metric scores inherently yield better interpretability, offering a human-centered perspective for rethinking XAI evaluation.
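The paper does not reproduce its manipulation procedure on this page, so the following is a minimal sketch of one plausible way to degrade an attribution-based explanation to a target correctness level: corrupt a `1 - correctness` fraction of time steps with random values. The function name `degrade_attribution`, the corruption strategy, and the toy saliency array are illustrative assumptions, not the authors' materials.

```python
import numpy as np

def degrade_attribution(true_attr: np.ndarray, correctness: float,
                        rng: np.random.Generator) -> np.ndarray:
    """Return an explanation that agrees with the true attribution on
    approximately a `correctness` fraction of time steps.

    Hypothetical procedure: replace a random (1 - correctness) fraction
    of time steps with fresh noise, breaking the signal at those steps.
    """
    attr = true_attr.copy()
    n = attr.shape[0]
    n_corrupt = round((1.0 - correctness) * n)
    idx = rng.choice(n, size=n_corrupt, replace=False)
    attr[idx] = rng.normal(size=n_corrupt)  # overwrite selected steps
    return attr

rng = np.random.default_rng(0)
true_attr = rng.normal(size=100)          # toy per-time-step saliency
for level in (1.00, 0.85, 0.70, 0.55):    # the study's four conditions
    degraded = degrade_attribution(true_attr, level, rng)
    intact = np.mean(degraded == true_attr)
    print(f"target correctness {level:.2f} -> intact fraction {intact:.2f}")
```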

📝 Abstract
Explainable AI (XAI) methods are commonly evaluated with functional metrics such as correctness, which computationally estimate how accurately an explanation reflects the model's reasoning. Higher correctness is assumed to produce better human understanding, but this link has not been tested experimentally with controlled levels. We conducted a user study (N=200) that manipulated explanation correctness at four levels (100%, 85%, 70%, 55%) in a time series classification task where participants could not rely on domain knowledge or visual intuition and instead predicted the AI's decisions based on explanations (forward simulation). Correctness affected understanding, but not at every level: performance dropped at 70% and 55% correctness relative to fully correct explanations, while further degradation below 70% produced no additional loss. Rather than shifting performance uniformly, lower correctness decreased the proportion of participants who learned the decision pattern. At the same time, even fully correct explanations did not guarantee understanding, as only a subset of participants achieved high accuracy. Exploratory analyses showed that self-reported ratings correlated with demonstrated performance only when explanations were fully correct and participants had learned the pattern. These findings show that not all differences in functional correctness translate to differences in human understanding, underscoring the need to validate functional metrics against human outcomes.
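As a hedged illustration of the forward-simulation analysis the abstract describes, the sketch below scores each participant's accuracy at predicting the AI's decisions, splits participants into those who did or did not learn the pattern, and correlates self-reported understanding with demonstrated performance. The function name `forward_sim_accuracy`, the 0.75 learner threshold, the simulated 80% response accuracy, and the 1–7 rating scale are assumptions for the example, not values from the paper; the trial data here is synthetic.

```python
import numpy as np
from scipy.stats import spearmanr

def forward_sim_accuracy(predicted: np.ndarray, actual: np.ndarray) -> np.ndarray:
    """Per-participant accuracy at predicting the AI's decisions.

    predicted, actual: (n_participants, n_trials) integer label arrays.
    """
    return (predicted == actual).mean(axis=1)

rng = np.random.default_rng(1)
n_participants, n_trials = 50, 20          # e.g., one correctness condition
actual = rng.integers(0, 2, size=(n_participants, n_trials))
# Simulate responders who match the AI's decision 80% of the time.
predicted = np.where(rng.random((n_participants, n_trials)) < 0.8,
                     actual, 1 - actual)
self_report = rng.integers(1, 8, size=n_participants)  # 1-7 rating, synthetic

acc = forward_sim_accuracy(predicted, actual)
learners = acc > 0.75                      # assumed threshold for "learned the pattern"
rho, p = spearmanr(self_report, acc)       # self-report vs. demonstrated performance
print(f"learner fraction: {learners.mean():.2f}, spearman rho={rho:.2f} (p={p:.3f})")
```

In the study, this kind of correlation was informative only in the fully correct condition and among participants who had learned the pattern, which is why the analysis is run per condition and per learner subgroup rather than pooled.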
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
Explanation Correctness
Human Understanding
Functional Metrics
Forward Simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explanation Correctness
Human Understanding
Explainable AI (XAI)
Forward Simulation
Functional Metrics Validation