🤖 AI Summary
Weak zero-shot generalization remains a key challenge in detecting disinformation in online news and social media posts. To address this, the authors propose Persuasion-Augmented Chain of Thought (PCoT), which infuses psychological knowledge of persuasive fallacies into large language models' zero-shot reasoning. Alongside PCoT, the paper releases two timely, out-of-distribution benchmark datasets, EUDisinfo and MultiDis, whose content postdates the evaluated models' knowledge cutoffs and is therefore entirely unseen by them. The approach is model-agnostic and works with diverse mainstream LLMs. Experiments across five LLMs and five datasets show that PCoT outperforms competitive methods by 15% on average, substantially strengthening zero-shot detection of novel disinformation and supporting interpretable, theory-driven detection grounded in persuasion research.
📝 Abstract
Disinformation detection is a key aspect of media literacy. Psychological studies have shown that knowledge of persuasive fallacies helps individuals detect disinformation. Inspired by these findings, we experimented with large language models (LLMs) to test whether infusing persuasion knowledge enhances disinformation detection. As a result, we introduce the Persuasion-Augmented Chain of Thought (PCoT), a novel approach that leverages persuasion to improve disinformation detection in zero-shot classification. We extensively evaluate PCoT on online news and social media posts. Moreover, we publish two novel, up-to-date disinformation datasets: EUDisinfo and MultiDis. These datasets enable the evaluation of PCoT on content entirely unseen by the LLMs used in our experiments, as the content was published after the models' knowledge cutoffs. We show that, on average, PCoT outperforms competitive methods by 15% across five LLMs and five datasets. These findings highlight the value of persuasion in strengthening zero-shot disinformation detection.
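To make the idea concrete, the sketch below shows one plausible way to infuse persuasion knowledge into a zero-shot chain-of-thought prompt. The fallacy list, prompt wording, and reasoning steps here are illustrative assumptions, not the paper's actual PCoT template:

```python
# Illustrative sketch of a persuasion-augmented zero-shot prompt.
# ASSUMPTION: the fallacy list and prompt wording below are hypothetical;
# the paper's actual PCoT template may differ.

PERSUASION_FALLACIES = [
    "Appeal to fear",
    "Loaded language",
    "Appeal to authority",
    "Whataboutism",
    "Bandwagon",
]

def build_pcot_prompt(text: str) -> str:
    """Compose a zero-shot prompt that asks an LLM to reason about
    persuasive fallacies before classifying the input text."""
    fallacies = "\n".join(f"- {f}" for f in PERSUASION_FALLACIES)
    return (
        "You are a fact-checking assistant.\n"
        "Persuasive fallacies often used in disinformation:\n"
        f"{fallacies}\n\n"
        f"Text to analyze:\n{text}\n\n"
        "Step 1: Identify which of the fallacies above, if any, appear in the text.\n"
        "Step 2: Explain how each identified fallacy is used.\n"
        "Step 3: Based on this reasoning, classify the text as "
        "'disinformation' or 'credible'. Answer with the label only."
    )

prompt = build_pcot_prompt("Everyone knows the experts are lying about this!")
print(prompt.splitlines()[0])  # → You are a fact-checking assistant.
```

The resulting string would be sent to any chat-capable LLM, which is what makes a prompting method like this model-agnostic: the persuasion knowledge lives entirely in the prompt, not in model weights.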