🤖 AI Summary
This study challenges the common assumption that AI explanations inherently improve user trust and decision quality. We investigate whether explanations actually promote deliberate adoption of AI recommendations. Using an online experiment with behavioral tracking and statistical modeling, we measure users' actual engagement with explanations and quantify the impact of that engagement on decision adjustments. Results reveal that most users only superficially scan explanations without deep comprehension; explanation adoption rates are low, and decision changes are significantly constrained by explanation presentation format and cognitive load. Our key contribution is the empirical identification of an "attention gap" in explainability: exposure does not imply understanding, nor does understanding guarantee adoption. Consequently, we argue that effective explanation design must move beyond static representations toward dynamic, interactive mechanisms that reduce cognitive load and scaffold user reasoning, offering a novel pathway for building trustworthy human-AI collaborative systems.
📝 Abstract
In the context of AI-based decision support systems (DSS), explanations can help users judge when to trust the AI's suggestion and when to question it. In this way, human oversight can prevent AI errors and biased decision-making. However, this rests on the assumption that users will consider explanations in enough detail to catch such errors. We conducted an online study on trust in explainable DSS and were surprised to find that, in many cases, participants spent little time on the explanation and did not always consider it in detail. We present an exploratory analysis of this data, investigating which factors influence how carefully study participants consider AI explanations, and how this in turn affects whether they are open to changing their minds based on what the AI suggests.