Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection

📅 2025-03-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from knowledge hallucinations that lead to unreliable reasoning, undermining their utility in fact-sensitive tasks like fake news detection. Method: This paper introduces SR³, the first supervised self-reinforced reasoning rectification framework that repurposes hallucination-induced erroneous (i.e., negative) reasoning as a discriminative signal. It constructs NRFE, a semantic consistency representation model trained on positive/negative news-reasoning pairs, and distills it into a lightweight student model, NRFE-D, that takes only news content as input. The approach integrates reflective reasoning, multi-stage prompting, semantic consistency modeling, and supervised self-reinforcement, bypassing label-implicated reasoning and its associated biases. Results: Evaluated on three benchmark fake news datasets, SR³ significantly outperforms LLM prompting, fine-tuned small models, and other representative state-of-the-art methods, demonstrating that leveraging negative reasoning substantially enhances detection robustness and generalization.
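To make the reflective prompting step concrete, the sketch below elicits both reasonable (positive) reasoning and deliberately flawed (negative) reasoning for a news item in a single-turn approximation. The model name, prompt wording, and the `generate_reasoning` helper are illustrative assumptions, not the paper's actual multi-stage templates.

```python
# Hedged sketch: eliciting positive and negative (hallucination-style)
# reasoning for a news item. Prompts and model choice are illustrative
# assumptions, not the paper's templates.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_reasoning(news: str, stance: str) -> str:
    """Ask the LLM for a credible justification ('positive') or a
    plausible-sounding but factually wrong one ('negative')."""
    if stance == "positive":
        instruction = "Explain step by step whether this news item is credible."
    else:
        instruction = ("Reflect on the claim and write a plausible-sounding "
                       "but factually wrong justification for it.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a careful news analyst."},
            {"role": "user", "content": f"{instruction}\n\nNews: {news}"},
        ],
    )
    return response.choices[0].message.content

news = "City officials confirm record rainfall flooded the downtown area."
positive = generate_reasoning(news, "positive")  # reasonable reasoning
negative = generate_reasoning(news, "negative")  # negative reasoning
```

In SR³, positive and negative traces of this kind are paired with the news text to supervise semantic consistency learning.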

📝 Abstract
Questionable responses caused by knowledge hallucination can destabilize LLMs' decision-making. However, whether LLM hallucination can be exploited to generate negative reasoning that facilitates fake news detection has never been investigated. This study proposes a novel supervised self-reinforced reasoning rectification approach, SR³, which yields both common reasonable reasoning and wrong understandings (negative reasoning) for news via LLM reflection for semantic consistency learning. Upon that, we construct a negative reasoning-based news learning model, NRFE, which leverages positive or negative news-reasoning pairs to learn the semantic consistency between them. To avoid the impact of label-implicated reasoning, we deploy a student model, NRFE-D, that takes only news content as input and inspects the performance of our method by distilling knowledge from NRFE. Experimental results on three popular fake news datasets demonstrate the superiority of our method over three kinds of baselines: prompting LLMs, fine-tuning pre-trained SLMs, and other representative fake news detection methods.
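Concretely, semantic consistency learning over news-reasoning pairs can be pictured as a cross-encoder classifier. The following is a minimal sketch assuming a bert-base-uncased backbone and the label convention 1 = consistent (positive pair), 0 = inconsistent (negative pair); the paper's actual NRFE encoder and training setup may differ.

```python
# Minimal sketch of NRFE-style semantic consistency learning: a cross-encoder
# classifies whether a (news, reasoning) pair is consistent. The checkpoint
# and label convention are assumptions, not the paper's configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy positive (label 1) and negative (label 0) news-reasoning pairs.
pairs = [
    ("Record rainfall floods downtown.", "Weather agencies verified the storm.", 1),
    ("Record rainfall floods downtown.", "Rain cannot occur in cities.", 0),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for news, reasoning, label in pairs:
    # Encode the pair jointly so attention spans both segments.
    inputs = tokenizer(news, reasoning, return_tensors="pt", truncation=True)
    outputs = model(**inputs, labels=torch.tensor([label]))
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because such a model sees the reasoning at inference time, the label-leakage concern raised in the abstract motivates distilling it into a news-only student (NRFE-D).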
Problem

Research questions and friction points this paper is trying to address.

Explores the usability of LLM hallucinations for fake news detection.
Proposes SR³ to generate negative reasoning via LLM reflection.
Develops the NRFE model, which learns from positive/negative news-reasoning pairs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

SR³: Supervised self-reinforced reasoning rectification approach
NRFE: Negative reasoning-based news learning model
NRFE-D: Student model that distills NRFE's knowledge while taking only news content as input (see the distillation sketch after this list)
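As referenced in the NRFE-D item above, distilling the pair-based teacher into a news-only student is commonly done by blending a soft-target loss against the teacher's logits with the hard-label loss. The sketch below is a generic Hinton-style distillation objective; the temperature, loss weighting, and two-class (fake/real) setup are assumptions, not the paper's reported hyperparameters.

```python
# Hedged sketch of NRFE -> NRFE-D knowledge distillation. The teacher scores
# (news, reasoning) pairs; the student sees news text only. Temperature and
# loss weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend soft-target KL against the teacher with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # standard scaling to keep gradients comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a batch of 4 news items, 2 classes (fake / real).
student_logits = torch.randn(4, 2, requires_grad=True)
teacher_logits = torch.randn(4, 2)  # from the frozen NRFE teacher
labels = torch.tensor([0, 1, 1, 0])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```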
👥 Authors
Chaowei Zhang
Department of Computer Science at Yangzhou University
Natural Language Processing · Data Mining · Parallel Computing
Zongling Feng
Yangzhou University
Zewei Zhang
Auburn University
Jipeng Qiang
Yangzhou University
Data Mining · NLP
Guandong Xu
The Education University of Hong Kong
Yun Li
Yangzhou University