What do people want to fact-check?

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a gap in misinformation research, which has focused largely on the supply side while overlooking what the public actually wants fact-checked when free to ask anything. The authors conduct the first large-scale empirical analysis of nearly 2,500 open-ended fact-checking requests submitted by 457 participants, systematically coding each request along five semantic dimensions: domain, epistemic form, verifiability, target entity, and temporal reference. The findings show that users tend to submit simple descriptive claims, and that roughly one quarter of requests involve content that cannot be empirically verified. Moreover, the structural characteristics of these real-world requests differ sharply from those in mainstream benchmark datasets such as FEVER, exposing a systematic mismatch between current AI-driven fact-checking systems and actual user needs.

📝 Abstract
Research on misinformation has focused almost exclusively on supply, asking what falsehoods circulate, who produces them, and whether corrections work. A basic demand-side question remains unanswered. When ordinary people can fact-check anything they want, what do they actually ask about? We provide the first large-scale evidence on this question by analyzing close to 2,500 statements submitted by 457 participants to an open-ended AI fact-checking system. Each claim is classified along five semantic dimensions (domain, epistemic form, verifiability, target entity, and temporal reference), producing a behavioral map of public verification demand. Three findings stand out. First, users range widely across topics but default to a narrow epistemic repertoire, overwhelmingly submitting simple descriptive claims about present-day observables. Second, roughly one in four requests concerns statements that cannot be empirically resolved, including moral judgments, speculative predictions, and subjective evaluations, revealing a systematic mismatch between what users seek from fact-checking tools and what such tools can deliver. Third, comparison with the FEVER benchmark dataset exposes sharp structural divergences across all five dimensions, indicating that standard evaluation corpora encode a synthetic claim environment that does not resemble real-world verification needs. These results reframe fact-checking as a demand-driven problem and identify where current AI systems and benchmarks are misaligned with the uncertainty people actually experience.
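
The five-dimension coding scheme lends itself to a simple data model. Below is a minimal Python sketch of what one coded record might look like; the class names and category values are illustrative assumptions, not the authors' actual codebook.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical category values for two of the five dimensions;
# the paper only names the dimensions, not their full label sets.
class Verifiability(Enum):
    EMPIRICAL = "empirically verifiable"
    NON_EMPIRICAL = "not empirically resolvable"  # e.g. moral judgments, predictions

class TemporalReference(Enum):
    PAST = "past"
    PRESENT = "present-day"
    FUTURE = "future / speculative"

@dataclass
class CodedClaim:
    """One fact-checking request, coded along the paper's five dimensions."""
    text: str
    domain: str                     # e.g. "health", "politics" (assumed labels)
    epistemic_form: str             # e.g. "simple descriptive" (per the abstract)
    verifiability: Verifiability
    target_entity: str              # e.g. "person", "organization", "event"
    temporal: TemporalReference

# Example record in the shape the paper describes: a simple descriptive
# claim about a present-day observable, the dominant pattern in the data.
claim = CodedClaim(
    text="Is it true that the unemployment rate fell last month?",
    domain="economy",
    epistemic_form="simple descriptive",
    verifiability=Verifiability.EMPIRICAL,
    target_entity="event",
    temporal=TemporalReference.PRESENT,
)
```

Records in this shape would also make the paper's FEVER comparison straightforward: code both corpora with the same schema and compare the marginal distributions per dimension.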
Problem

Research questions and friction points this paper is trying to address.

fact-checking
misinformation
demand-side
verification
AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

fact-checking demand
semantic classification
verifiability mismatch
AI benchmark evaluation
user behavior analysis