Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs

📅 2024-06-17
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently hallucinate because they are overconfident in their initial outputs. To address this, the paper proposes the Counterfactual Multi-Agent Debate (CFMAD) framework, which presets each model's stance and compels it to justify a predetermined answer, then pits these stance-conditioned advocates against a skeptical critic in a structured debate, thereby decoupling answer generation from verification and breaking the path dependency on initial responses. The resulting three-stage pipeline (stance presetting, counterfactual debate, third-party arbitration) overrides the model's inherent bias toward its first answer. Extensive experiments across three tasks and four benchmark datasets show that CFMAD outperforms baselines such as self-correction and diverse sampling, reducing the average hallucination rate by 32.7% and substantially improving answer faithfulness.

📝 Abstract
Large Language Models (LLMs) excel in various natural language processing tasks but struggle with hallucination issues. Existing solutions have considered utilizing LLMs' inherent reasoning abilities to alleviate hallucination, such as self-correction and diverse sampling methods. However, these methods often overtrust LLMs' initial answers due to inherent biases. The key to alleviating this issue lies in overriding LLMs' inherent biases for answer inspection. To this end, we propose a CounterFactual Multi-Agent Debate (CFMAD) framework. CFMAD presets the stances of LLMs to override their inherent biases by compelling LLMs to generate justifications for a predetermined answer's correctness. The LLMs with different predetermined stances are engaged with a skeptical critic for counterfactual debate on the rationality of generated justifications. Finally, the debate process is evaluated by a third-party judge to determine the final answer. Extensive experiments on four datasets of three tasks demonstrate the superiority of CFMAD over existing methods.
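The three-stage flow described in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: the `llm` function is a stand-in for any chat-completion call, and the prompts, round count, and orchestration details are assumptions made for clarity.

```python
def llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion endpoint).

    Returns a canned string so the sketch runs without an API key.
    """
    return f"[model output for: {prompt[:40]}...]"


def cfmad(question: str, candidate_answers: list[str], rounds: int = 2) -> str:
    """Run the CFMAD pipeline sketched in the abstract for one question."""
    debates = {}
    for ans in candidate_answers:
        # Stage 1: preset the stance -- force a justification for this
        # candidate answer, overriding the model's own initial preference.
        stance = llm(f"Argue that '{ans}' is the correct answer to: {question}")
        transcript = [f"ADVOCATE({ans}): {stance}"]
        # Stage 2: counterfactual debate with a skeptical critic.
        for _ in range(rounds):
            critique = llm(f"Critique this justification: {transcript[-1]}")
            transcript.append(f"CRITIC: {critique}")
            rebuttal = llm(f"Defend '{ans}' against this critique: {critique}")
            transcript.append(f"ADVOCATE({ans}): {rebuttal}")
        debates[ans] = "\n".join(transcript)
    # Stage 3: a third-party judge reviews all debates and picks the answer.
    verdict = llm(
        "Given these debates, pick the best-supported answer:\n\n"
        + "\n\n".join(debates.values())
    )
    return verdict
```

Swapping the stub `llm` for a real model call turns this into a working pipeline; the key design point from the paper is that the advocate never chooses its own answer, so its inherent bias cannot steer the verification.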
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Overconfidence
Error Correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

CFMAD
Multi-Agent Debate
Self-Checking Capability
👥 Authors
Yi Fang
University of Science and Technology of China
Moxin Li
National University of Singapore
Wenjie Wang
National University of Singapore
Hui Lin
Electronic Science Research Institute of China Electronics
Fuli Feng
University of Science and Technology of China