🤖 AI Summary
This work addresses the susceptibility of large language models to hallucination in retrieval-augmented generation (RAG), where existing detection methods suffer from confirmation bias due to information sharing between the generator and verifier. To mitigate this, the authors propose a multi-agent collaborative framework comprising a solver, a proposer, and a checker. By deliberately introducing information asymmetry, the framework withholds the original model output from the verifier, preventing self-confirming bias. The response is decomposed into atomic propositions, each validated against the retrieved evidence alone, and multi-agent reinforcement learning jointly optimizes the agents for factual consistency. Evaluated across multiple hallucination benchmarks, the approach significantly reduces hallucination rates, with an 8B-parameter model achieving performance on par with leading closed-source large language models, thereby enabling self-improving factual alignment.
📝 Abstract
Hallucination remains a critical bottleneck for large language models (LLMs), undermining their reliability in real-world applications, especially in Retrieval-Augmented Generation (RAG) systems. While existing hallucination detection methods employ LLM-as-a-judge to verify LLM outputs against retrieved evidence, they suffer from inherent confirmation bias, where the verifier inadvertently reproduces the errors of the original generation. To address this, we introduce Multi-Agent Reinforced Self-Check for Hallucination (MARCH), a framework that enforces rigorous factual alignment by leveraging deliberate information asymmetry. MARCH orchestrates a collaborative pipeline of three specialized agents: a Solver, a Proposer, and a Checker. The Solver generates an initial RAG response, which the Proposer decomposes into claim-level verifiable atomic propositions. Crucially, the Checker validates these propositions against retrieved evidence in isolation, deprived of the Solver's original output. This deliberate information asymmetry breaks the cycle of self-confirmation bias. By training this pipeline with multi-agent reinforcement learning (MARL), we enable the agents to co-evolve and optimize factual adherence. Extensive experiments across hallucination benchmarks demonstrate that MARCH substantially reduces hallucination rates. Notably, an 8B-parameter LLM equipped with MARCH achieves performance competitive with powerful closed-source models. MARCH paves a scalable path toward factual self-improvement of LLMs through agent co-evolution. Code is available at https://github.com/Qwen-Applications/MARCH.
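The Solver → Proposer → Checker pipeline with information asymmetry can be sketched as plain Python, with toy string-matching agents standing in for LLM calls. Everything here is an illustrative assumption, not the actual MARCH implementation: the agent functions, `Verdict` type, and `march_pipeline` name are hypothetical, and the fabricated "founded in 1800" claim exists only to show the Checker flagging an unsupported proposition.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    proposition: str
    supported: bool


def solver(question: str, evidence: list[str]) -> str:
    # Toy generator: echoes the first evidence snippet, then
    # fabricates a detail to simulate a hallucination.
    return f"{evidence[0]}. It was founded in 1800"


def proposer(response: str) -> list[str]:
    # Decompose the response into atomic propositions
    # (toy heuristic: split on sentence boundaries).
    return [s.strip() for s in response.split(".") if s.strip()]


def checker(proposition: str, evidence: list[str]) -> Verdict:
    # Information asymmetry: the Checker sees only one atomic
    # proposition and the retrieved evidence, never the Solver's
    # full response, so it cannot inherit the generator's errors.
    return Verdict(proposition, any(proposition in e for e in evidence))


def march_pipeline(question: str, evidence: list[str]):
    response = solver(question, evidence)
    propositions = proposer(response)
    verdicts = [checker(p, evidence) for p in propositions]
    # The response is flagged if any proposition lacks support.
    hallucinated = any(not v.supported for v in verdicts)
    return response, verdicts, hallucinated


if __name__ == "__main__":
    evidence = ["Paris is the capital of France"]
    _, verdicts, hallucinated = march_pipeline(
        "What is the capital of France?", evidence
    )
    for v in verdicts:
        print(v)
    print("hallucinated:", hallucinated)
```

In the full framework, each of these stubs would be an LLM agent and the three would be jointly trained with multi-agent RL; the sketch only makes the data flow, and where the asymmetry sits, concrete.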