LLM-GAN: Construct Generative Adversarial Network Through Large Language Models For Explainable Fake News Detection

📅 2024-09-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing fake news detection methods underutilize large language models (LLMs), suffering from low accuracy, poor interpretability, high misclassification rates on highly realistic fake news, and heavy reliance on manual verification—resulting in low efficiency. To address these limitations, we propose an LLM-intrinsic adversarial paradigm: leveraging prompt engineering to endow a single LLM with dual roles—as both a generator and an interpretable detector—thereby establishing a closed-loop framework of generation–detection–explanation. Our approach integrates generative adversarial training, interpretability alignment mechanisms, and a cloud-native service architecture. Evaluated on multiple benchmark datasets, our method achieves an 8.2% improvement in prediction accuracy and attains an explanation faithfulness score of 0.87 against human evaluations. The solution has been deployed as a real-time, interpretable fake news detection cloud service.

Technology Category

Application Category

📝 Abstract
Explainable fake news detection predicts the authenticity of news items with annotated explanations. Today, Large Language Models (LLMs) are known for their powerful natural language understanding and explanation generation abilities. However, presenting LLMs for explainable fake news detection remains two main challenges. Firstly, fake news appears reasonable and could easily mislead LLMs, leaving them unable to understand the complex news-faking process. Secondly, utilizing LLMs for this task would generate both correct and incorrect explanations, which necessitates abundant labor in the loop. In this paper, we propose LLM-GAN, a novel framework that utilizes prompting mechanisms to enable an LLM to become Generator and Detector and for realistic fake news generation and detection. Our results demonstrate LLM-GAN's effectiveness in both prediction performance and explanation quality. We further showcase the integration of LLM-GAN to a cloud-native AI platform to provide better fake news detection service in the cloud.
Problem

Research questions and friction points this paper is trying to address.

fake news detection
ultra-large language models
interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-GAN
Bidirectional Capability
Cloud Integration
🔎 Similar Papers
No similar papers found.
Y
Yifeng Wang
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
Zhouhong Gu
Zhouhong Gu
Fudan University
Language ModelingAutomated SocietyModel Editing
Siwei Zhang
Siwei Zhang
ETH Zurich
3D human pose estimationhuman-scene interactions
S
Suhang Zheng
Alibaba Group
T
Tao Wang
Alibaba Group
T
Tianyu Li
Alibaba Group
Hongwei Feng
Hongwei Feng
Fudan University
knowledge management,AI,big data
Y
Yanghua Xiao
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University