Is Conversational XAI All You Need? Human-AI Decision Making With a Conversational XAI Assistant

📅 2025-01-29
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study investigates whether conversational explainable AI (XAI) interfaces, while enhancing user understanding and trust, also induce overreliance on the AI system. The authors conduct a controlled user study comparing an LLM-powered conversational XAI interface against a traditional XAI dashboard, complemented by quantitative behavioral analysis. Both interfaces significantly improved users' understanding of the AI system and their trust in it, yet users of both showed clear overreliance on the AI system, and LLM-enhanced conversations amplified rather than mitigated this overreliance. The authors attribute the effect to an illusion of explanatory depth accompanying both interfaces: users overestimate how well the explanations let them understand the AI system's reasoning. These findings offer empirical evidence and design warnings for building conversational XAI interfaces that foster appropriate reliance and improve human-AI collaboration.

📝 Abstract
Explainable artificial intelligence (XAI) methods are being proposed to help interpret and understand how AI systems reach specific predictions. Inspired by prior work on conversational user interfaces, we argue that augmenting existing XAI methods with conversational user interfaces can increase user engagement and boost user understanding of the AI system. In this paper, we explored the impact of a conversational XAI interface on users' understanding of the AI system, their trust, and reliance on the AI system. In comparison to an XAI dashboard, we found that the conversational XAI interface can bring about a better understanding of the AI system among users and higher user trust. However, users of both the XAI dashboard and conversational XAI interfaces showed clear overreliance on the AI system. Enhanced conversations powered by large language model (LLM) agents amplified overreliance. Based on our findings, we reason that the potential cause of such overreliance is the illusion of explanatory depth that is concomitant with both XAI interfaces. Our findings have important implications for designing effective conversational XAI interfaces to facilitate appropriate reliance and improve human-AI collaboration. Code can be found at https://github.com/delftcrowd/IUI2025_ConvXAI
Problem

Research questions and friction points this paper is trying to address.

XAI (Explainable Artificial Intelligence)
User Trust
AI Dependency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
User Trust
Dialogue Interface