Retrieval-Augmented Generation with Conflicting Evidence

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Retrieval-augmented generation (RAG) systems struggle to maintain factual consistency when faced with ambiguous queries, conflicting evidence from multiple sources, and co-occurring noisy or misleading documents. Method: This paper introduces RAMDocs, a benchmark dataset reflecting this realistic complexity, and proposes MADAM-RAG, a multi-agent debate framework. MADAM-RAG jointly models query ambiguity, document misleadingness, and noise interference: large language model (LLM) agents debate iteratively to disambiguate entities and suppress errors, combined with ambiguity-aware retrieval, credibility-weighted fusion, and dynamic response aggregation. Contribution/Results: Experiments show MADAM-RAG improves accuracy by up to 11.40% on AmbigDocs and misleading-information suppression by up to 15.80% (absolute) on FaithEval, using Llama3.3-70B-Instruct. The results also expose critical limitations of existing RAG methods on complex, multi-source conflicts, highlighting the need for structured reasoning under uncertainty.

📝 Abstract
Large language model (LLM) agents are increasingly employing retrieval-augmented generation (RAG) to improve the factuality of their responses. However, in practice, these systems often need to handle ambiguous user queries and potentially conflicting information from multiple sources while also suppressing inaccurate information from noisy or irrelevant documents. Prior work has generally studied and addressed these challenges in isolation, considering only one aspect at a time, such as handling ambiguity or robustness to noise and misinformation. We instead consider multiple factors simultaneously, proposing (i) RAMDocs (Retrieval with Ambiguity and Misinformation in Documents), a new dataset that simulates complex and realistic scenarios for conflicting evidence for a user query, including ambiguity, misinformation, and noise; and (ii) MADAM-RAG, a multi-agent approach in which LLM agents debate over the merits of an answer over multiple rounds, allowing an aggregator to collate responses corresponding to disambiguated entities while discarding misinformation and noise, thereby handling diverse sources of conflict jointly. We demonstrate the effectiveness of MADAM-RAG using both closed and open-source models on AmbigDocs -- which requires presenting all valid answers for ambiguous queries -- improving over strong RAG baselines by up to 11.40% and on FaithEval -- which requires suppressing misinformation -- where we improve by up to 15.80% (absolute) with Llama3.3-70B-Instruct. Furthermore, we find that RAMDocs poses a challenge for existing RAG baselines (Llama3.3-70B-Instruct only obtains 32.60 exact match score). While MADAM-RAG begins to address these conflicting factors, our analysis indicates that a substantial gap remains especially when increasing the level of imbalance in supporting evidence and misinformation.
Problem

Research questions and friction points this paper is trying to address.

Handling ambiguous queries and conflicting information in RAG systems
Suppressing inaccurate information from noisy or irrelevant documents
Jointly addressing ambiguity, misinformation, and noise in retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

RAMDocs dataset simulates complex conflicting evidence scenarios
MADAM-RAG uses multi-agent debate for answer aggregation
Handles ambiguity, misinformation, and noise simultaneously
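
The debate-then-aggregate idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the generic `llm(prompt)` callable, the prompt wording, the abstain convention, and the two-agent consensus threshold in the aggregator are all assumptions made here for the sketch; the actual MADAM-RAG assigns one agent per retrieved document and uses an LLM-based aggregator with credibility-weighted fusion.

```python
from collections import Counter
from typing import Callable, List

def madam_rag(query: str, documents: List[str],
              llm: Callable[[str], str], rounds: int = 3) -> List[str]:
    """Toy MADAM-RAG-style loop: one agent per retrieved document,
    iterative debate, then aggregation over the final answers."""
    # Round 0: each agent answers from its own document in isolation.
    answers = [llm(f"Q: {query}\nDoc: {doc}\nAnswer:") for doc in documents]
    for _ in range(rounds - 1):
        summary = "; ".join(answers)  # what the other agents currently claim
        # Each agent sees the debate so far and may revise or abstain.
        answers = [
            llm(f"Q: {query}\nDoc: {doc}\n"
                f"Other agents said: {summary}\n"
                f"Revise your answer, or reply ABSTAIN:")
            for doc in documents
        ]
    # Aggregator: keep every answer asserted by at least two agents --
    # a crude stand-in for credibility-weighted fusion that still lets
    # multiple valid answers survive for ambiguous queries.
    kept = Counter(a for a in answers if a != "ABSTAIN")
    if not kept:
        return []
    consensus = sorted(a for a, n in kept.items() if n >= 2)
    return consensus or [kept.most_common(1)[0][0]]
```

Because the aggregator returns a list rather than a single string, an ambiguous query can surface several disambiguated answers, while a lone misinformed agent's claim is dropped unless peers corroborate it.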