🤖 AI Summary
Large language models (LLMs) suffer from hallucination-induced unreliability in multimodal reasoning, and existing multi-agent debate (MAD) paradigms unrealistically assume that all agents are rational and capable of reflection. Method: We propose a Multi-Agent Undercover Game framework that introduces counterfactual image perturbation to generate verifiable factual evidence, and equips agents with a dynamic, evidence-driven active reasoning mechanism so they can collaboratively identify and exclude hallucinating participants during deliberation. Contribution/Results: Our approach breaks from traditional consensus-based reasoning by enabling real-time cross-modal evidence verification and socially grounded inference. Experiments demonstrate significant improvements in hallucination detection accuracy, decision reliability, and interpretability. The implementation is publicly available.
📝 Abstract
Hallucination continues to pose a major obstacle to the reasoning capabilities of large language models (LLMs). Although the Multi-Agent Debate (MAD) paradigm offers a promising solution by promoting consensus among multiple agents to enhance reliability, it relies on the unrealistic assumption that all debaters are rational and reflective, a condition that may not hold when the agents themselves are prone to hallucination. To address this gap, we introduce the Multi-agent Undercover Gaming (MUG) protocol, inspired by social deduction games such as "Who is Undercover?". MUG reframes MAD as a process of detecting "undercover" agents (those suffering from hallucinations) by employing multimodal counterfactual tests. Specifically, we modify reference images to introduce counterfactual evidence and observe whether agents can accurately identify these changes, providing ground truth for identifying hallucinating agents and enabling robust, crowd-powered multimodal reasoning. MUG advances MAD protocols along three key dimensions: (1) enabling factual verification beyond statistical consensus through counterfactual testing; (2) introducing cross-evidence reasoning via dynamically modified evidence sources instead of relying on static inputs; and (3) fostering active reasoning, where agents engage in probing discussions rather than passively answering questions. Collectively, these innovations offer a more reliable and effective framework for multimodal reasoning in LLMs. The source code can be accessed at https://github.com/YongLD/MUG.git.
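The core counterfactual test described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' implementation): agent behavior is mocked with plain functions, where a real system would query multimodal LLMs with the perturbed image. The names `Agent`, `counterfactual_round`, and `changed_fact` are assumptions introduced for this sketch.

```python
# Hypothetical sketch of MUG's counterfactual test: show agents a perturbed
# image and flag those who fail to report the edit as "undercover".
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    name: str
    # image_id -> the agent's claimed observation (mocked here;
    # a real system would call a multimodal LLM on the image)
    describe: Callable[[str], str]

def counterfactual_round(agents: List[Agent],
                         perturbed_img: str,
                         changed_fact: str) -> List[str]:
    """Agents that still assert the original (now-false) fact, i.e. whose
    answer omits the injected change, are flagged as hallucinating."""
    undercover = []
    for agent in agents:
        claim = agent.describe(perturbed_img)
        if changed_fact not in claim:  # failed to notice the perturbation
            undercover.append(agent.name)
    return undercover

# Mock agents: "b" ignores the image and repeats a memorized answer,
# mimicking a hallucinating debater.
grounded = Agent("a", lambda img: "the cat is blue"
                 if img == "img_cf" else "the cat is red")
hallucinating = Agent("b", lambda img: "the cat is red")

flagged = counterfactual_round([grounded, hallucinating], "img_cf", "blue")
print(flagged)  # ['b']
```

In the full protocol this check is only the evidence-gathering step; the agents then deliberate over the flags rather than trusting a single round.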