Bounding Hallucinations: Information-Theoretic Guarantees for RAG Systems via Merlin-Arthur Protocols

📅 2025-12-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing RAG systems treat retrieved results as heuristic prompts rather than verifiable evidence, leading to LLM hallucinations, incorrect answers, and reliance on spurious grounds. Method: We propose the first RAG training framework grounded in the Merlin-Arthur interactive proof protocol: the generator (Arthur) answers reliably from retrieved evidence, abstains when uncertain, and traces its answers to sources, while the retriever (Merlin) outputs falsifiable evidence. We introduce the Explained Information Fraction (EIF) to quantify explanation fidelity, and we integrate adversarial context injection (Morgana), linear-time XAI attribution, and mutual-information analysis for unsupervised abstention and hallucination mitigation. Results: On three RAG benchmarks and across models of multiple scales, our approach significantly improves grounding, completeness, reliability, and abstention behavior while reducing hallucination, and it simultaneously boosts retrieval recall and MRR.

📝 Abstract
Retrieval-augmented generation (RAG) models rely on retrieved evidence to guide large language model (LLM) generators, yet current systems treat retrieval as a weak heuristic rather than verifiable evidence. As a result, LLMs answer without support, hallucinate under incomplete or misleading context, and rely on spurious evidence. We introduce a training framework that treats the entire RAG pipeline -- both the retriever and the generator -- as an interactive proof system via an adaptation of the Merlin-Arthur (M/A) protocol. Arthur (the generator LLM) trains on questions of unknown provenance: Merlin provides helpful evidence, while Morgana injects adversarial, misleading context. Both use a linear-time XAI method to identify and modify the evidence most influential to Arthur. Consequently, Arthur learns to (i) answer when the context supports the answer, (ii) reject when evidence is insufficient, and (iii) rely on the specific context spans that truly ground the answer. We further introduce a rigorous evaluation framework to disentangle explanation fidelity from baseline predictive errors. This allows us to introduce and measure the Explained Information Fraction (EIF), which normalizes M/A certified mutual-information guarantees relative to model capacity and imperfect benchmarks. Across three RAG datasets and two model families of varying sizes, M/A-trained LLMs show improved groundedness, completeness, soundness, and reject behavior, as well as reduced hallucinations -- without needing manually annotated unanswerable questions. The retriever likewise improves recall and MRR through automatically generated M/A hard positives and negatives. Our results demonstrate that autonomous interactive-proof-style supervision provides a principled and practical path toward reliable RAG systems that treat retrieved documents not as suggestions, but as verifiable evidence.
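The abstract's training signal can be illustrated with a toy sketch: Arthur answers only when the evidence supports the candidate answer, and is rewarded for answering on helpful (Merlin) contexts and rejecting misleading (Morgana) ones. All names, the lexical-support policy, and the reward shape below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the Merlin/Morgana/Arthur training signal.
# The decision policy and reward are simplified stand-ins for the
# paper's LLM-based setup.
from dataclasses import dataclass

@dataclass
class Context:
    spans: list            # retrieved evidence spans (strings)
    adversarial: bool      # known at training time: injected by "Morgana"

def arthur_decide(answer: str, ctx: Context) -> str:
    """Toy policy: answer only when the candidate answer is literally
    supported by some retrieved span; otherwise reject."""
    text = " ".join(ctx.spans).lower()
    return "ANSWER" if answer.lower() in text else "REJECT"

def ma_reward(decision: str, ctx: Context) -> int:
    """Training signal: reward answering on helpful (Merlin) evidence
    and rejecting misleading (Morgana) evidence."""
    want = "REJECT" if ctx.adversarial else "ANSWER"
    return 1 if decision == want else -1

merlin = Context(["Paris is the capital of France."], adversarial=False)
morgana = Context(["Lyon is the capital of France."], adversarial=True)

print(ma_reward(arthur_decide("Paris", merlin), merlin))    # rewarded answer
print(ma_reward(arthur_decide("Paris", morgana), morgana))  # rewarded reject
```

In training, the adversarial label comes for free because Morgana's injections are generated automatically, which is why the abstract notes that no manually annotated unanswerable questions are needed.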
Problem

Research questions and friction points this paper is trying to address.

Develop a training framework to reduce hallucinations in RAG systems
Improve RAG models' ability to reject insufficient or misleading evidence
Enhance explanation fidelity and grounding in retrieval-augmented generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training RAG systems using Merlin-Arthur interactive proof protocols
Using adversarial context injection and XAI to identify influential evidence
Introducing Explained Information Fraction to measure certified information guarantees
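The paper does not spell out the EIF formula in this summary; a plausible reading of "normalizes certified mutual-information guarantees relative to model capacity" is a ratio of certified information to the information the model could achieve at all. The Fano-style bound and the normalization below are assumptions for illustration only.

```python
# Hypothetical sketch of an Explained Information Fraction (EIF):
# certified mutual information divided by a capacity/benchmark ceiling.
# The exact definition in the paper may differ.
import math

def binary_mi_lower_bound(accuracy: float) -> float:
    """Bound (in bits) on I(Y; decision) for a balanced binary task,
    derived from classification accuracy via the binary entropy."""
    def h(p: float) -> float:
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h(accuracy)

def explained_information_fraction(certified_bits: float,
                                   ceiling_bits: float) -> float:
    """Normalize certified MI by the model's achievable MI, so that
    explanation fidelity is judged relative to model capacity."""
    return 0.0 if ceiling_bits == 0.0 else certified_bits / ceiling_bits

# E.g. explanations certified at 85% while the model tops out at 95%:
eif = explained_information_fraction(binary_mi_lower_bound(0.85),
                                     binary_mi_lower_bound(0.95))
print(round(eif, 3))
```

The point of the normalization is that a weak model with faithful explanations should not score worse than a strong model with unfaithful ones.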