Debate-to-Detect: Reformulating Misinformation Detection as a Real-World Debate with Large Language Models

📅 2025-05-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing fake news detection methods predominantly rely on static binary classification, failing to capture the dynamic, iterative reasoning inherent in fact-checking; while large language models (LLMs) offer strong reasoning capabilities, they remain vulnerable to logical inconsistencies and superficial verification. To address these limitations, the authors propose Debate-to-Detect (D2D), a Multi-Agent Debate (MAD) framework that reformulates detection as a five-stage structured adversarial debate (Opening Statement, Rebuttal, Free Debate, Closing Statement, and Judgment) conducted by role-specialized LLM agents with domain-specific profiles. D2D further introduces a multi-dimensional evaluation mechanism integrating Factuality, Source Reliability, Reasoning Quality, Clarity, and Ethics, moving beyond conventional binary classification. Implemented with GPT-4o, D2D achieves significant performance gains over baseline methods on two benchmark fake news datasets. Case studies demonstrate its capacity to iteratively strengthen evidential chains, thereby enhancing decision transparency and robustness.

📝 Abstract
The proliferation of misinformation on digital platforms reveals the limitations of traditional detection methods, which mostly rely on static classification and fail to capture the intricate process of real-world fact-checking. Despite advancements in Large Language Models (LLMs) that enhance automated reasoning, their application to misinformation detection remains hindered by issues of logical inconsistency and superficial verification. In response, we introduce Debate-to-Detect (D2D), a novel Multi-Agent Debate (MAD) framework that reformulates misinformation detection as a structured adversarial debate. Inspired by fact-checking workflows, D2D assigns domain-specific profiles to each agent and orchestrates a five-stage debate process, including Opening Statement, Rebuttal, Free Debate, Closing Statement, and Judgment. To transcend traditional binary classification, D2D introduces a multi-dimensional evaluation mechanism that assesses each claim across five distinct dimensions: Factuality, Source Reliability, Reasoning Quality, Clarity, and Ethics. Experiments with GPT-4o on two fake news datasets demonstrate significant improvements over baseline methods, and a case study highlights D2D's capability to iteratively refine evidence while improving decision transparency, representing a substantial advancement towards robust and interpretable misinformation detection. The code will be open-sourced in a future release.
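The five-stage debate and five-dimension judgment described in the abstract can be sketched as a simple orchestration loop. This is a minimal illustration only, not the paper's implementation (its code has not yet been released): the `llm(role, prompt)` callable, the prompt wording, the 0-1 scoring scale, and the average-based verdict rule are all assumptions standing in for a role-primed chat-completion call (e.g. to GPT-4o).

```python
# Hypothetical sketch of a D2D-style debate pipeline. Stage and dimension
# names follow the abstract; everything else is illustrative.

STAGES = ["Opening Statement", "Rebuttal", "Free Debate", "Closing Statement"]
DIMENSIONS = ["Factuality", "Source Reliability", "Reasoning Quality",
              "Clarity", "Ethics"]

def run_debate(claim, llm):
    """Debate `claim` between two sides, then judge it.

    `llm(role, prompt)` is a placeholder for an LLM call primed with a
    domain-specific profile for `role`. Returns (verdict, per-dimension scores).
    """
    transcript = []
    # Stages 1-4: the Affirmative defends the claim, the Negative attacks it.
    for stage in STAGES:
        for side in ("Affirmative", "Negative"):
            context = "\n".join(transcript)
            turn = llm(side, f"[{stage}] Claim: {claim}\nDebate so far:\n{context}")
            transcript.append(f"{side} ({stage}): {turn}")

    # Stage 5 (Judgment): score the claim on each of the five dimensions,
    # then aggregate the scores into a binary verdict.
    scores = {}
    for dim in DIMENSIONS:
        scores[dim] = float(llm(
            "Judge",
            f"Score the claim's {dim} from 0 to 1 given:\n" + "\n".join(transcript)))
    verdict = "real" if sum(scores.values()) / len(scores) >= 0.5 else "fake"
    return verdict, scores
```

Because the judge emits one score per dimension rather than a single label, the verdict remains inspectable: a claim can fail on Source Reliability while scoring well on Clarity, which is the kind of transparency the paper argues static classifiers lack.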
Problem

Research questions and friction points this paper is trying to address.

Detecting misinformation via dynamic debate instead of static classification
Addressing LLMs' logical inconsistency in misinformation detection
Evaluating claims multi-dimensionally beyond binary classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Agent Debate framework for misinformation detection
Five-stage structured adversarial debate process
Multi-dimensional evaluation mechanism for claims
Chen Han
School of Advanced Interdisciplinary Sciences, UCAS; State Key Laboratory of Mathematical Sciences, AMSS, CAS
Wenzhen Zheng
Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Xijin Tang
School of Advanced Interdisciplinary Sciences, UCAS; State Key Laboratory of Mathematical Sciences, AMSS, CAS