🤖 AI Summary
This study investigates how errors in autonomous vehicle (AV) decision explanations affect passengers' subjective experience. Method: A driving simulation experiment with 232 participants was conducted, employing Likert-scale assessments and moderated regression modeling to examine how explanation inaccuracies affect user perceptions across varying levels of contextual harm and driving difficulty, while accounting for prior trust and domain expertise. Contribution/Results: Explanation errors significantly degrade passengers' comfort in relying on the AV, preference for control, confidence in the AV's driving ability, and explanation satisfaction. High-harm/high-difficulty contexts not only exhibit strong main effects on these outcomes but also amplify the detrimental impact of errors. Prior trust and technical expertise are positively associated with outcome ratings. This work provides empirical evidence that explanation inaccuracy systematically erodes trust-related metrics and proposes a "context-adaptive + personalized" explanation design paradigm, advancing both theoretical foundations and engineering pathways for trustworthy human-AV collaboration.
📝 Abstract
Explanations for autonomous vehicle (AV) decisions may build trust; however, explanations can contain errors. In a simulated driving study (n = 232), we tested how AV explanation errors, driving context characteristics (perceived harm and driving difficulty), and personal traits (prior trust and expertise) affected a passenger's comfort in relying on an AV, preference for control, confidence in the AV's ability, and explanation satisfaction. Errors negatively affected all outcomes. Surprisingly, despite identical driving, explanation errors reduced ratings of the AV's driving ability. Error severity and potential harm amplified the negative impact of errors. Contextual harm and driving difficulty directly affected outcome ratings and moderated the relationship between errors and outcomes. Prior trust and expertise were positively associated with outcome ratings. The results underscore the need for accurate, contextually adaptive, and personalized AV explanations to foster trust, reliance, satisfaction, and confidence. We conclude with design, research, and deployment recommendations for trustworthy AV explanation systems.