FactGuard: Event-Centric and Commonsense-Guided Fake News Detection

πŸ“… 2025-11-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing fake news detection methods are vulnerable to style-mimicking attacks, while large language models (LLMs) suffer from shallow reasoning, unclear practical usability, and high computational costs. To address these challenges, we propose FactGuard, a novel event-centric fake news detection framework. FactGuard weakens reliance on surface-level writing style by extracting core event elements (e.g., participants, actions, and spatio-temporal context). It introduces a dynamic usability mechanism that adaptively fuses the outputs of LLM-based commonsense reasoning and contradiction detection. Furthermore, we employ knowledge distillation to derive FactGuard-D, a lightweight variant that supports cold-start deployment and operation under resource constraints. Extensive experiments on two benchmark datasets demonstrate that FactGuard significantly improves detection accuracy and robustness, effectively mitigating both style sensitivity and the trade-off between LLM capability and practical deployability.
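The distillation step behind FactGuard-D can be illustrated with a minimal sketch: the lightweight student is trained on a mix of the ground-truth label and the full FactGuard teacher's soft predictions. The loss form, the `alpha` weighting, and the function name below are illustrative assumptions, not the paper's exact objective:

```python
import math

def distillation_loss(teacher_probs, student_probs, true_label,
                      alpha=0.5, eps=1e-9):
    """Combine hard-label cross-entropy with a KL term that pulls the
    student's class distribution toward the teacher's soft predictions.

    Note: this loss form and the alpha weighting are assumptions for
    illustration, not FactGuard's published objective.
    """
    # supervised cross-entropy on the ground-truth label
    ce = -math.log(student_probs[true_label] + eps)
    # KL(teacher || student) over the class distribution
    kl = sum(t * math.log((t + eps) / (s + eps))
             for t, s in zip(teacher_probs, student_probs))
    return alpha * ce + (1.0 - alpha) * kl
```

With `alpha = 1.0` this reduces to ordinary supervised training; lowering `alpha` lets the teacher's soft labels carry more of the signal, which is what allows the student to operate without the teacher (or the LLM) at inference time.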

πŸ“ Abstract
Fake news detection methods based on writing style have achieved remarkable progress. However, as adversaries increasingly imitate the style of authentic news, the effectiveness of such approaches is gradually diminishing. Recent research has explored incorporating large language models (LLMs) to enhance fake news detection. Yet, despite their transformative potential, LLMs remain an untapped goldmine for fake news detection, with their real-world adoption hampered by shallow functionality exploration, ambiguous usability, and prohibitive inference costs. In this paper, we propose a novel fake news detection framework, dubbed FactGuard, that leverages LLMs to extract event-centric content, thereby reducing the impact of writing style on detection performance. Furthermore, our approach introduces a dynamic usability mechanism that identifies contradictions and ambiguous cases in factual reasoning, adaptively incorporating LLM advice to improve decision reliability. To ensure efficiency and practical deployment, we employ knowledge distillation to derive FactGuard-D, enabling the framework to operate effectively in cold-start and resource-constrained scenarios. Comprehensive experiments on two benchmark datasets demonstrate that our approach consistently outperforms existing methods in both robustness and accuracy, effectively addressing the challenges of style sensitivity and LLM usability in fake news detection.
Problem

Research questions and friction points this paper is trying to address.

Detecting fake news robustly when adversaries imitate the writing style of authentic news
Addressing LLMs' shallow functionality exploration and prohibitive inference costs
Improving decision reliability through a dynamic mechanism for factual reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts event-centric content using LLMs to reduce style sensitivity
Introduces a dynamic usability mechanism that adaptively incorporates LLM advice for contradictory or ambiguous cases
Employs knowledge distillation (FactGuard-D) for efficient, cold-start deployment
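The dynamic usability mechanism above can be sketched as a learned gate that decides how much of the LLM's advice to trust relative to the base detector. The sigmoid gate, the linear blend, and all function names here are assumptions for illustration; the paper's actual fusion may be more elaborate:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def fuse_predictions(detector_prob: float, llm_prob: float,
                     usability_score: float) -> float:
    """Blend the base detector's fake-news probability with the LLM's
    advice, weighted by a (hypothetical) learned usability score.

    A high usability score (confident, internally consistent LLM
    reasoning) shifts the final probability toward the LLM; a low score
    falls back to the base detector.
    """
    gate = sigmoid(usability_score)  # gate in (0, 1)
    return gate * llm_prob + (1.0 - gate) * detector_prob
```

The key design point is that the gate is computed per article: when the LLM's commonsense reasoning contradicts itself or is ambiguous, the usability score drops and the system relies on the event-centric detector instead of the LLM.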
πŸ”Ž Similar Papers
No similar papers found.
Jing He
School of Software and AI, Yunnan University
Han Zhang
School of Software and AI, Yunnan University
Yuanhui Xiao
School of Software and AI, Yunnan University
Wei Guo
School of Software and AI, Yunnan University
Shaowen Yao
School of Software and AI, Yunnan University
Renyang Liu
National University of Singapore
AI Security & Data Privacy · Machine Unlearning · Computer Vision