Multi-agent Undercover Gaming: Hallucination Removal via Counterfactual Test for Multimodal Reasoning

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from hallucination-induced unreliability in multimodal reasoning, and existing multi-agent debate (MAD) paradigms unrealistically assume that all agents are rational and capable of reflection. Method: We propose a Multi-Agent Undercover Game framework that introduces counterfactual image perturbation to generate verifiable factual evidence, together with a dynamic, evidence-driven active reasoning mechanism that lets agents collaboratively identify and exclude hallucinating participants during deliberation. Contribution/Results: Our approach breaks from traditional consensus-based reasoning by enabling real-time cross-modal evidence verification and socially grounded inference. Experiments demonstrate significant improvements in hallucination detection accuracy, decision reliability, and interpretability. The implementation is publicly available.

📝 Abstract
Hallucination continues to pose a major obstacle to the reasoning capabilities of large language models (LLMs). Although the Multi-Agent Debate (MAD) paradigm offers a promising solution by promoting consensus among multiple agents to enhance reliability, it relies on the unrealistic assumption that all debaters are rational and reflective, a condition that may not hold when agents themselves are prone to hallucinations. To address this gap, we introduce the Multi-agent Undercover Gaming (MUG) protocol, inspired by social deduction games like "Who is Undercover?". MUG reframes MAD as a process of detecting "undercover" agents (those suffering from hallucinations) by employing multimodal counterfactual tests. Specifically, we modify reference images to introduce counterfactual evidence and observe whether agents can accurately identify these changes, providing ground truth for identifying hallucinating agents and enabling robust, crowd-powered multimodal reasoning. MUG advances MAD protocols along three key dimensions: (1) enabling factual verification beyond statistical consensus through counterfactual testing; (2) introducing cross-evidence reasoning via dynamically modified evidence sources instead of relying on static inputs; and (3) fostering active reasoning, where agents engage in probing discussions rather than passively answering questions. Collectively, these innovations offer a more reliable and effective framework for multimodal reasoning in LLMs. The source code can be accessed at https://github.com/YongLD/MUG.git.
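The counterfactual test described above can be sketched as a simple loop: perturb the reference image, re-ask each agent, and flag any agent whose answer does not change, since a grounded agent should notice the edit. This is a minimal toy simulation, not the paper's implementation; the agent behaviours and string-based "images" are illustrative stand-ins.

```python
from typing import Callable, Dict, List

def run_undercover_round(
    agents: Dict[str, Callable[[str], str]],
    original: str,
    perturbed: str,
) -> List[str]:
    """Flag agents whose answer is identical on the original and the
    counterfactually edited image: an answer that ignores the visual
    edit suggests the agent is hallucinating ("undercover")."""
    undercover = []
    for name, answer in agents.items():
        if answer(original) == answer(perturbed):
            undercover.append(name)
    return undercover

# Simulated agents; the strings stand in for real visual inputs.
agents = {
    "grounded": lambda img: "red" if "red_ball" in img else "blue",
    "hallucinating": lambda img: "red",  # ignores the image entirely
}

flagged = run_undercover_round(agents, "scene_with_red_ball", "scene_with_blue_ball")
print(flagged)  # ['hallucinating']
```

In the actual protocol the perturbation yields verifiable ground truth, so exclusion is evidence-driven rather than a majority vote over possibly unreliable agents.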
Problem

Research questions and friction points this paper is trying to address.

Addresses hallucination issues in large language models during multimodal reasoning
Improves multi-agent debate by identifying hallucinating agents through counterfactual tests
Enables reliable multimodal reasoning using modified visual evidence and active discussions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces counterfactual tests to detect hallucinating agents
Modifies reference images for cross-evidence reasoning
Enables active probing discussions instead of passive answers
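The active-probing step in the list above might be sketched as follows: after an image edit, every agent answers a probe whose correct answer is fixed by the edit, and agents contradicting that verifiable evidence are excluded from the deliberation. All names and behaviours here are hypothetical illustrations, not the paper's API.

```python
from typing import Dict, Set

def exclude_by_probe(replies: Dict[str, str], verified_answer: str) -> Set[str]:
    """Return agents whose probe reply contradicts the ground truth
    established by the counterfactual image edit."""
    return {name for name, reply in replies.items() if reply != verified_answer}

# Probe: "What colour is the ball?" after the ball was edited to blue.
replies = {"a1": "blue", "a2": "blue", "a3": "red"}
excluded = exclude_by_probe(replies, verified_answer="blue")
print(sorted(excluded))  # ['a3']
```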