Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies

📅 2025-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses user overreliance on large language models (LLMs): a form of trust miscalibration that arises when fluent, seemingly credible outputs conceal factual errors. Through a think-aloud study and a large-scale, pre-registered controlled experiment (N = 308), the authors isolate the effects of three features of LLM responses on users' reliance and accuracy: explanations (supporting details for answers), sources, and inconsistencies within explanations. Key findings: (1) the presence of explanations increases reliance on both correct and incorrect responses; (2) providing sources reduces reliance on incorrect responses; and (3) inconsistencies in explanations likewise reduce reliance on incorrect responses. The work advances understanding of human-LLM collaboration and offers empirically grounded design implications for fostering appropriate reliance on LLMs.

📝 Abstract
Large language models (LLMs) can produce erroneous responses that sound fluent and convincing, raising the risk that users will rely on these responses as if they were correct. Mitigating such overreliance is a key challenge. Through a think-aloud study in which participants use an LLM-infused application to answer objective questions, we identify several features of LLM responses that shape users' reliance: explanations (supporting details for answers), inconsistencies in explanations, and sources. Through a large-scale, pre-registered, controlled experiment (N=308), we isolate and study the effects of these features on users' reliance, accuracy, and other measures. We find that the presence of explanations increases reliance on both correct and incorrect responses. However, we observe less reliance on incorrect responses when sources are provided or when explanations exhibit inconsistencies. We discuss the implications of these findings for fostering appropriate reliance on LLMs.
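As a concrete illustration of the reliance measure described above, here is a minimal Python sketch of how reliance on correct versus incorrect responses could be tallied from trial-level data. The `Trial` schema, condition names, and metric definitions are hypothetical illustrations, not the paper's actual data format or analysis code; appropriate reliance corresponds to a high rate on correct responses together with a low rate on incorrect ones.

```python
# Illustrative sketch (not from the paper): computing reliance rates from
# trial-level data in an experiment like the one described above.
from dataclasses import dataclass

@dataclass
class Trial:
    condition: str     # hypothetical label, e.g. "explanation", "source", "baseline"
    llm_correct: bool  # was the LLM response factually correct?
    user_relied: bool  # did the participant adopt the LLM's answer?

def reliance_rates(trials: list[Trial]) -> dict[str, float]:
    """Report reliance separately on correct and incorrect LLM responses.

    Appropriate reliance = high reliance_on_correct, low reliance_on_incorrect.
    """
    correct = [t for t in trials if t.llm_correct]
    incorrect = [t for t in trials if not t.llm_correct]
    rate = lambda ts: sum(t.user_relied for t in ts) / len(ts) if ts else float("nan")
    return {
        "reliance_on_correct": rate(correct),      # higher is better
        "reliance_on_incorrect": rate(incorrect),  # lower is better (overreliance)
    }

# Toy usage: compare an experimental condition against a baseline.
trials = [
    Trial("explanation", llm_correct=True, user_relied=True),
    Trial("explanation", llm_correct=False, user_relied=True),
    Trial("baseline", llm_correct=False, user_relied=False),
]
for cond in ("explanation", "baseline"):
    subset = [t for t in trials if t.condition == cond]
    print(cond, reliance_rates(subset))
```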
Problem

Research questions and friction points this paper is trying to address.

Mitigating user overreliance on large language models.
Understanding how explanations affect users' reliance on LLM responses.
Analyzing the impact of sources and explanation inconsistencies on reliance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explanations increase reliance on both correct and incorrect responses.
Sources reduce reliance on incorrect responses.
Inconsistencies in explanations reduce reliance on incorrect responses.
🔎 Similar Papers
No similar papers found.