MAIN-RAG: Multi-Agent Filtering Retrieval-Augmented Generation

📅 2024-12-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RAG systems suffer from unreliable LLM generation due to high retrieval noise and low relevance of retrieved documents. To address this, we propose a training-free multi-agent collaborative filtering framework featuring a novel adaptive thresholding mechanism grounded in consensus among multiple LLM agents: each agent independently scores retrieved documents, and only those achieving cross-agent consensus are retained—enabling joint optimization of retrieval quality and generation reliability in a zero-shot setting. Our method requires no fine-tuning, auxiliary training data, or architectural modifications, ensuring strong generalizability. Evaluated on four open-domain QA benchmarks, it improves answer accuracy by 2–11% over strong baselines, significantly reduces irrelevant retrievals, and concurrently enhances response consistency and factual correctness.

📝 Abstract
Large Language Models (LLMs) are becoming essential tools for various natural language processing tasks but often suffer from generating outdated or incorrect information. Retrieval-Augmented Generation (RAG) addresses this issue by incorporating external, real-time information retrieval to ground LLM responses. However, existing RAG systems frequently struggle with the quality of retrieved documents, as irrelevant or noisy documents degrade performance, increase computational overhead, and undermine response reliability. To tackle this problem, we propose Multi-Agent Filtering Retrieval-Augmented Generation (MAIN-RAG), a training-free RAG framework that leverages multiple LLM agents to collaboratively filter and score retrieved documents. Specifically, MAIN-RAG introduces an adaptive filtering mechanism that dynamically adjusts the relevance filtering threshold based on score distributions, effectively minimizing noise while maintaining high recall of relevant documents. The proposed approach leverages inter-agent consensus to ensure robust document selection without requiring additional training data or fine-tuning. Experimental results across four QA benchmarks demonstrate that MAIN-RAG consistently outperforms traditional RAG approaches, achieving a 2–11% improvement in answer accuracy while reducing the number of irrelevant retrieved documents. Quantitative analysis further reveals that our approach achieves superior response consistency and answer accuracy over baseline methods, offering a competitive and practical alternative to training-based solutions.
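The filtering step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's exact algorithm: it assumes each agent returns a relevance score in [0, 1] per document, takes the cross-agent mean as the consensus score, and sets the adaptive threshold from the consensus-score distribution (here mean minus a fraction of the standard deviation; `alpha` and `adaptive_filter` are hypothetical names).

```python
from statistics import mean, stdev

def adaptive_filter(doc_scores, alpha=0.5):
    """Keep documents whose consensus score clears an adaptive threshold.

    doc_scores: dict mapping document id -> list of per-agent relevance
    scores in [0, 1]. The threshold is derived from the distribution of
    consensus scores (a simplification of MAIN-RAG's mechanism).
    """
    # Consensus score per document: mean across agents.
    consensus = {doc: mean(scores) for doc, scores in doc_scores.items()}
    values = list(consensus.values())
    spread = stdev(values) if len(values) > 1 else 0.0
    # Adaptive threshold tracks the score distribution rather than
    # using a fixed cutoff.
    threshold = mean(values) - alpha * spread
    # Return surviving documents, highest consensus first.
    return [doc for doc, score in
            sorted(consensus.items(), key=lambda kv: -kv[1])
            if score >= threshold]

scores = {
    "doc_a": [0.9, 0.8, 0.95],  # agents agree: relevant
    "doc_b": [0.2, 0.1, 0.3],   # agents agree: noise
    "doc_c": [0.7, 0.6, 0.8],
}
kept = adaptive_filter(scores)
```

Because the cutoff moves with the score distribution, the same code drops obvious noise when most documents score well, yet remains permissive when all retrieved documents score poorly, which is how the paper maintains recall while reducing noise.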
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Retrieval-Augmented Generation
Performance Degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Agent
Filtering and Retrieval
Enhanced Accuracy