Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills

📅 2024-10-05
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
AI decision support often undermines human judgment, primarily because existing explanations adopt a one-sided justificatory stance, neglect users' cognitive reasoning processes, and fail to explicitly contrast AI predictions with human inference patterns.
Method: We propose a human-centered, contrastive explanation framework, the first to integrate models of human misconceptions into explainable AI (XAI). Leveraging counterfactual reasoning and cognitive modeling, it generates bidirectional explanations that both anticipate user misjudgments and highlight discrepancies between human and AI reasoning.
Contribution/Results: This end-to-end approach shifts AI design from merely "explaining itself" toward actively "enhancing human judgment." In a controlled experiment with 628 participants, our contrastive explanations significantly improved users' independent decision-making capability (p < 0.001) without compromising decision accuracy.

📝 Abstract
People's decision-making abilities often fail to improve or may even erode when they rely on AI for decision-support, even when the AI provides informative explanations. We argue this is partly because people intuitively seek contrastive explanations, which clarify the difference between the AI's decision and their own reasoning, while most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking. To align human-AI knowledge on decision tasks, we introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice about the same task. Results from a large-scale experiment (N = 628) demonstrate that contrastive explanations significantly enhance users' independent decision-making skills compared to unilateral explanations, without sacrificing decision accuracy. Amid rising deskilling concerns, our research demonstrates that incorporating human reasoning into AI design can foster human skill development.
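The core mechanism the abstract describes, contrasting the AI's choice with a predicted likely human choice, can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the linear scoring, the feature names, and the loan-screening task are all invented assumptions standing in for the paper's cognitive model of human misconceptions.

```python
# Toy sketch of a contrastive explanation: compare the AI's choice with the
# choice a simple model of human reasoning predicts, then explain the
# feature on which the two models disagree most. All names and weights are
# illustrative assumptions, not the paper's actual framework.

def predict_choice(weights, features):
    """Score each option with a linear model and return the top-scoring one."""
    scores = {opt: sum(weights[f] * v for f, v in feats.items())
              for opt, feats in features.items()}
    return max(scores, key=scores.get)

def contrastive_explanation(ai_weights, human_weights, features):
    """Explain the AI's choice relative to a predicted likely human choice."""
    ai_choice = predict_choice(ai_weights, features)
    human_choice = predict_choice(human_weights, features)
    if ai_choice == human_choice:
        return f"Both you and the AI likely agree on '{ai_choice}'."
    # Rank features by how differently the AI and the human model weigh them
    diffs = sorted(ai_weights,
                   key=lambda f: abs(ai_weights[f] - human_weights[f]),
                   reverse=True)
    return (f"You may lean toward '{human_choice}', but the AI chose "
            f"'{ai_choice}' because it weighs '{diffs[0]}' differently "
            f"than people typically do.")

# Hypothetical task: pick a loan applicant from normalized evidence values
features = {
    "applicant_A": {"income": 0.9, "credit_history": 0.2},
    "applicant_B": {"income": 0.4, "credit_history": 0.8},
}
ai_weights = {"income": 0.3, "credit_history": 0.7}     # AI trusts history
human_weights = {"income": 0.8, "credit_history": 0.1}  # humans over-weight income

print(contrastive_explanation(ai_weights, human_weights, features))
```

A unilateral explanation would only justify the AI's pick; the contrastive version also surfaces the likely human pick and the point of divergence, which is the alignment the paper argues supports skill development.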
Problem

Research questions and friction points this paper is trying to address.

Why do AI explanations often fail to improve, or even erode, human decision-making skills?
Can contrastive explanations that account for users' own reasoning close the gap between AI decisions and human inference?
Can human-centered AI design foster skill development without sacrificing decision accuracy?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates contrastive explanations for AI decisions
Aligns AI choices with predicted human reasoning
Enhances decision-making skills without accuracy loss