Show me the evidence: Evaluating the role of evidence and natural language explanations in AI-supported fact-checking

📅 2026-01-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses a gap in understanding how non-expert users draw on evidence when interacting with AI-assisted fact-checking systems, particularly alongside natural language explanations. Through a controlled experiment manipulating explanation type, AI prediction confidence, and recommendation correctness, the research systematically examines users' evidence-seeking behavior. Combining quantitative behavioral data with qualitative interviews, the work shows that evidence serves as a pivotal cue for users assessing the reliability of AI judgments. Notably, when explanations seem insufficient, users proactively consult the evidence and attempt to infer the credibility of its sources, and evidence used in combination with natural language explanations offers valuable decision support. These findings point to the need to improve both how evidence is presented and the interaction mechanisms around it, so that AI-assisted fact-checking better supports user judgment.

πŸ“ Abstract
Although much research has focused on AI explanations to support decisions in complex information-seeking tasks such as fact-checking, the role of evidence is surprisingly under-researched. In our study, we systematically varied explanation type, AI prediction certainty, and correctness of AI system advice for non-expert participants, who evaluated the veracity of claims and AI system predictions. Participants were given the option of easily inspecting the underlying evidence. We found that participants consistently relied on evidence to validate AI claims across all experimental conditions. When participants were presented with natural language explanations, evidence was used less frequently, although they relied on it when these explanations seemed insufficient or flawed. Qualitative data suggest that participants attempted to infer evidence source reliability, despite source identities being deliberately omitted. Our results demonstrate that evidence is a key ingredient in how people evaluate the reliability of information presented by an AI system and, in combination with natural language explanations, offers valuable support for decision-making. Further research is urgently needed to understand how evidence ought to be presented and how people engage with it in practice.
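The abstract describes a full factorial manipulation of three factors: explanation type, AI prediction certainty, and correctness of the AI's advice. As a minimal sketch of how such a condition grid can be enumerated (the factor names come from the abstract, but the specific levels below are illustrative assumptions, not the paper's actual design values):

```python
from itertools import product

# Manipulated factors named in the abstract; the levels here are
# illustrative assumptions, not the paper's actual design values.
explanation_type = ["none", "natural_language"]
prediction_certainty = ["low", "high"]
advice_correctness = ["correct", "incorrect"]

# Full factorial crossing of the three manipulated factors.
conditions = list(product(explanation_type, prediction_certainty, advice_correctness))

for expl, certainty, advice in conditions:
    print(f"explanation={expl}, certainty={certainty}, advice={advice}")
```

Crossing the factors this way yields every condition a participant could be assigned to, which is what allows evidence-seeking behavior to be attributed to each factor independently.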
Problem

Research questions and friction points this paper is trying to address.

evidence
fact-checking
AI explanations
human-AI interaction
information reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

evidence
natural language explanations
AI-supported fact-checking
human-AI interaction
information validation