Citations and Trust in LLM Generated Responses

📅 2025-01-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how citations influence user trust in responses generated by large language models (LLMs). Method: In an online real-time Q&A experiment, we randomly injected zero, one, or five citations (either contextually relevant or semantically random) into chatbot responses, measuring self-reported trust via validated scales and behavioral trust via citation-click logs. Contribution/Results: We find that the mere presence of citations significantly increases trust, even when the citations are random, yet actually checking the citations consistently reduces trust. These findings empirically support the "anti-monitoring framework," which posits that trust formation relies more on heuristic cues than on veridical content assessment: citations function as strong surface-level validity signals, independent of their factual accuracy or relevance. This challenges the prevailing assumption that verifiability inherently enhances trust and offers a cognitive-mechanistic account to inform the design of trustworthy AI interactions.

📝 Abstract
Question answering systems are rapidly advancing, but their opaque nature may impact user trust. We explored trust through an anti-monitoring framework, where trust is predicted to be correlated with the presence of citations and inversely related to checking citations. We tested this hypothesis with a live question-answering experiment that presented text responses generated using a commercial chatbot along with varying citations (zero, one, or five), both relevant and random, and recorded whether participants checked the citations and their self-reported trust in the generated responses. We found a significant increase in trust when citations were present, a result that held true even when the citations were random; we also found a significant decrease in trust when participants checked the citations. These results highlight the importance of citations in enhancing trust in AI-generated content.
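The experimental design described above can be sketched in code. This is a hypothetical illustration, not the authors' implementation: the function and field names (`assign_condition`, `log_trial`, `n_citations`, etc.) are assumptions introduced here to show how the 0/1/5 citation counts, relevant-vs-random conditions, and the two trust measures (citation clicks and self-report) might be assigned and recorded per trial.

```python
import random

# Citation conditions from the paper: zero, one, or five citations,
# and (when citations are present) relevant vs. random citations.
CITATION_COUNTS = [0, 1, 5]
RELEVANCE = ["relevant", "random"]


def assign_condition(rng: random.Random) -> dict:
    """Randomly assign a citation condition for one Q&A trial (hypothetical)."""
    n = rng.choice(CITATION_COUNTS)
    # Relevance only applies when at least one citation is shown.
    relevance = rng.choice(RELEVANCE) if n > 0 else None
    return {"n_citations": n, "relevance": relevance}


def log_trial(condition: dict, clicked_citations: int, trust_score: float) -> dict:
    """Combine the assigned condition with behavioral trust (citation clicks)
    and self-reported trust (validated scale score) for later analysis."""
    return {
        **condition,
        "checked_citations": clicked_citations > 0,
        "trust": trust_score,
    }


rng = random.Random(42)
condition = assign_condition(rng)
record = log_trial(condition, clicked_citations=0, trust_score=5.5)
```

A design like this lets the analysis compare trust across citation counts, relevance conditions, and whether participants actually verified the citations.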
Problem

Research questions and friction points this paper is trying to address.

AI Transparency
Trust Enhancement
Evidence-based Answers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trust Enhancement
AI Interaction
Evidential Impact