People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI

📅 2024-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how explainable AI (XAI) for autonomous driving can align with human cognitive patterns, specifically how people naturally explain autonomous vehicle behavior, and compares teleological, mechanistic, and counterfactual explanatory modes. Guided by cognitive science, the authors conducted a two-stage human experiment across 14 representative driving scenarios, yielding HEADD, a human-annotated explanation dataset for autonomous driving. Participants rated teleological explanations as significantly higher quality than counterfactual ones, and perceived teleology was the strongest predictor of perceived explanation quality. The authors argue that explanatory mode is an important axis of analysis for XAI design and evaluation, and they publicly release HEADD together with their code.

📝 Abstract
It is often argued that effective human-centered explainable artificial intelligence (XAI) should resemble human reasoning. However, empirical investigations of how concepts from cognitive science can aid the design of XAI are lacking. Based on insights from cognitive science, we propose a framework of explanatory modes to analyze how people frame explanations, whether mechanistic, teleological, or counterfactual. Using the complex safety-critical domain of autonomous driving, we conduct an experiment consisting of two studies on (i) how people explain the behavior of a vehicle in 14 unique scenarios (N1=54) and (ii) how they perceive these explanations (N2=382), curating the novel Human Explanations for Autonomous Driving Decisions (HEADD) dataset. Our main finding is that participants deem teleological explanations significantly better quality than counterfactual ones, with perceived teleology being the best predictor of perceived quality. Based on our results, we argue that explanatory modes are an important axis of analysis when designing and evaluating XAI and highlight the need for a principled and empirically grounded understanding of the cognitive mechanisms of explanation. The HEADD dataset and our code are available at: https://datashare.ed.ac.uk/handle/10283/8930.
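The headline finding, that perceived teleology best predicts perceived explanation quality, lends itself to a simple illustration. Below is a minimal sketch, not the authors' analysis, of how one might regress quality ratings on perceived-mode ratings once HEADD is downloaded; the file name and the column names (quality, teleology, mechanism, counterfactual) are hypothetical placeholders, so check the released dataset's actual schema.

```python
# Minimal sketch (assumptions flagged): regress perceived explanation
# quality on perceived explanatory-mode ratings, mirroring the paper's
# finding that perceived teleology best predicts quality.
# "headd_ratings.csv" and all column names are hypothetical; consult
# the HEADD release for the real file layout.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("headd_ratings.csv")

# Standardize the numeric columns so coefficient magnitudes are comparable.
cols = ["quality", "teleology", "mechanism", "counterfactual"]
z = (ratings[cols] - ratings[cols].mean()) / ratings[cols].std()

# Ordinary least squares: quality ~ perceived mode ratings.
model = smf.ols("quality ~ teleology + mechanism + counterfactual", data=z).fit()
print(model.summary())

# Rank predictors by standardized coefficient size; if the paper's
# finding holds, teleology should come out on top.
print(model.params.drop("Intercept").abs().sort_values(ascending=False))
```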
Problem

Research questions and friction points this paper is trying to address.

Human-centered explainable AI
Cognitive science insights
Autonomous driving explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Teleological explanations improve XAI design
Cognitive science aids explainable AI frameworks
HEADD dataset supports autonomous driving research
Balint Gyevnar
School of Informatics, University of Edinburgh
Stephanie Droop
School of Informatics, University of Edinburgh
Tadeg Quillien
School of Informatics, University of Edinburgh
Shay B. Cohen
School of Informatics, University of Edinburgh
Neil R. Bramley
School of Philosophy, Psychology and Language Sciences, University of Edinburgh
Christopher G. Lucas
University of Edinburgh
cognitive science, machine learning, Bayesian statistics, psychology
Stefano V. Albrecht
School of Informatics, University of Edinburgh
Artificial Intelligence, Autonomous Agents, Multi-Agent Systems, Reinforcement Learning