Beyond Surface Judgments: Human-Grounded Risk Evaluation of LLM-Generated Disinformation

📅 2026-04-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the widespread practice of using large language models (LLMs) as low-cost proxies to evaluate the impact of their own generated misinformation on human audiences, a practice whose validity remains unverified. For the first time, the authors benchmark LLM judgments against actual human reader feedback through a controlled experiment integrating 290 aligned articles, 2,043 human ratings, and outputs from eight state-of-the-art LLM evaluators. Systematic audits across scoring, ranking, and signal reliance dimensions reveal that LLMs consistently rate content more harshly than humans, exhibit weak item-level ranking correlation with human judgments, and overemphasize logical coherence while penalizing emotional expression. Despite high inter-LLM agreement, systematic discrepancies with human assessments challenge the foundational assumption that LLM consensus reliably proxies human perception.
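The scoring and ranking audits described above can be illustrated with a minimal sketch, not the authors' code: compare mean score levels to test for harshness, and use Spearman rank correlation for item-level ordering. The arrays, the 1–5 scale, the offset, and the noise level below are hypothetical stand-ins for the paper's data.

```python
# Minimal sketch of the scoring and ranking audits (illustrative only;
# the synthetic data, 1-5 scale, and variable names are assumptions).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_articles = 290

# Hypothetical per-article scores: aggregated human ratings and one judge.
human_scores = rng.uniform(1, 5, n_articles)
judge_scores = np.clip(human_scores - 0.6 + rng.normal(0, 1.2, n_articles), 1, 5)

# Scoring audit: is the judge systematically harsher than humans?
mean_gap = judge_scores.mean() - human_scores.mean()
print(f"mean judge - human gap: {mean_gap:+.2f}")

# Ranking audit: does the judge recover the human item-level ordering?
rho, p = spearmanr(judge_scores, human_scores)
print(f"item-level Spearman rho: {rho:.2f} (p={p:.3g})")
```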
📝 Abstract
Large language models (LLMs) can generate persuasive narratives at scale, raising concerns about their potential use in disinformation campaigns. Assessing this risk ultimately requires understanding how readers receive such content. In practice, however, LLM judges are increasingly used as a low-cost substitute for direct human evaluation, even though whether they faithfully track reader responses remains unclear. We recast evaluation in this setting as a proxy-validity problem and audit LLM judges against human reader responses. Using 290 aligned articles, 2,043 paired human ratings, and outputs from eight frontier judges, we examine judge–human alignment in terms of overall scoring, item-level ordering, and signal dependence. We find persistent judge–human gaps throughout. Relative to humans, judges are typically harsher, recover item-level human rankings only weakly, and rely on different textual signals, placing more weight on logical rigour while penalizing emotional intensity more strongly. At the same time, judges agree far more with one another than with human readers. These results suggest that LLM judges form a coherent evaluative group that is much more aligned internally than it is with human readers, indicating that internal agreement is not evidence of validity as a proxy for reader response.
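The headline contrast, high inter-judge agreement despite weak judge–human alignment, reduces to comparing average pairwise rank correlations within the judge group against each judge's correlation with human ratings. A hedged sketch under synthetic data follows; the latent-signal construction is an assumption chosen to mimic the reported pattern, not the paper's model.

```python
# Sketch of the consensus-vs-validity check: mean pairwise agreement
# among judges vs. mean judge-human agreement (all data hypothetical).
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_articles, n_judges = 290, 8

# Assumed setup: 8 judges share a latent signal that is only loosely
# related to the human signal.
latent = rng.normal(size=n_articles)
human = 0.3 * latent + rng.normal(size=n_articles)
judges = latent[None, :] + 0.3 * rng.normal(size=(n_judges, n_articles))

inter_judge = np.mean([spearmanr(judges[i], judges[j])[0]
                       for i, j in combinations(range(n_judges), 2)])
judge_human = np.mean([spearmanr(judges[i], human)[0]
                       for i in range(n_judges)])

print(f"mean inter-judge rho: {inter_judge:.2f}")   # high internal agreement
print(f"mean judge-human rho: {judge_human:.2f}")   # much weaker alignment
```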
Problem

Research questions and friction points this paper is trying to address.

disinformation
large language models
human evaluation
proxy validity
risk assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

proxy validity
LLM judges
human-grounded evaluation
disinformation detection
judge-human alignment
Zonghuan Xu
Institute of Trustworthy Embodied AI, Fudan University, Shanghai, China; Shanghai Key Laboratory of Multimodal Embodied AI, Shanghai, China
Xiang Zheng
Department of Computer Science, City University of Hong Kong
Reinforcement Learning · Trustworthy AI · Embodied AI
Yutao Wu
Deakin University, Australia
Xingjun Ma
Fudan University
Trustworthy AI · Multimodal AI · Generative AI · Embodied AI