RAGentA: Multi-Agent Retrieval-Augmented Generation for Attributed Question Answering

📅 2025-06-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address shortcomings in answer correctness, coverage, relevance, and faithfulness in attributed question answering, this paper proposes RAGentA, a multi-agent collaborative retrieval-augmented generation (RAG) framework. The method combines BM25 and dense retrieval for hybrid document recall, then adds iterative document filtering, inline-citation-driven answer generation, and dynamic completeness verification to keep answers traceable and credible. The result is an end-to-end multi-agent RAG architecture with built-in faithfulness evaluation and a dynamic refinement module. Experiments on a synthetic QA dataset derived from the FineWeb index show clear improvements over standard RAG baselines: hybrid retrieval raises Recall@20 by 12.5% over the best single retrieval model, while answer correctness improves by 1.09% and faithfulness by 10.72%. These results validate the framework's effectiveness in enhancing both answer quality and verifiability.
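The summary above mentions fusing BM25 (sparse) and dense retrieval into a single hybrid ranking. The paper's exact fusion method is not specified here; a minimal sketch using reciprocal rank fusion, one common way to merge two ranked lists, with toy document IDs standing in for real retrieval results:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc IDs into one list (RRF).

    Each document's score is the sum of 1 / (k + rank) over every
    list it appears in, so documents ranked highly by either
    retriever float to the top of the fused list.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Toy rankings standing in for BM25 (sparse) and embedding (dense) retrieval.
sparse = ["d1", "d3", "d5", "d2"]
dense = ["d3", "d2", "d1", "d4"]
fused = reciprocal_rank_fusion([sparse, dense])
# "d3" ranks first: it is near the top of both lists.
```

The constant `k` damps the influence of top ranks; 60 is a conventional default, not a value taken from the paper.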

📝 Abstract
We present RAGentA, a multi-agent retrieval-augmented generation (RAG) framework for attributed question answering (QA). With the goal of trustworthy answer generation, RAGentA focuses on optimizing answer correctness, defined by coverage and relevance to the question, and faithfulness, which measures the extent to which answers are grounded in retrieved documents. RAGentA uses a multi-agent architecture that iteratively filters retrieved documents, generates attributed answers with in-line citations, and verifies completeness through dynamic refinement. Central to the framework is a hybrid retrieval strategy that combines sparse and dense methods, improving Recall@20 by 12.5% compared to the best single retrieval model, resulting in more correct and well-supported answers. Evaluated on a synthetic QA dataset derived from the FineWeb index, RAGentA outperforms standard RAG baselines, achieving gains of 1.09% in correctness and 10.72% in faithfulness. These results demonstrate the effectiveness of the multi-agent architecture and hybrid retrieval in advancing trustworthy QA.
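The abstract reports retrieval quality as Recall@20, i.e., the fraction of relevant documents that appear among the top 20 retrieved. A minimal sketch of the metric (document IDs are illustrative, not from the paper's dataset):

```python
def recall_at_k(retrieved, relevant, k=20):
    """Fraction of relevant documents found in the top-k retrieved list."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
    return hits / len(relevant)

relevant = {"d2", "d7"}
retrieved = ["d1", "d2", "d3", "d7", "d9"]
recall_at_k(retrieved, relevant, k=5)  # both relevant docs in top 5 -> 1.0
recall_at_k(retrieved, relevant, k=2)  # only "d2" in top 2 -> 0.5
```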
Problem

Research questions and friction points this paper is trying to address.

Optimizing answer correctness and relevance in QA
Ensuring answers are grounded in retrieved documents
Improving retrieval effectiveness with hybrid strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent architecture for iterative document filtering
Hybrid retrieval combining sparse and dense methods
Dynamic refinement for answer completeness verification
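The three innovations above form an iterative loop: filter the retrieved documents, generate a cited answer, and verify completeness, refining if the check fails. A hedged sketch of that control flow; the agent callables here are toy stand-ins (real agents would be LLM-backed), and all names are illustrative, not taken from the paper:

```python
def answer_with_refinement(question, documents, filter_agent,
                           generator_agent, verifier_agent, max_rounds=3):
    """Iterate filter -> generate -> verify until the answer is judged complete."""
    answer = ""
    for _ in range(max_rounds):
        # 1. Keep only documents the filter agent judges relevant.
        kept = [d for d in documents if filter_agent(question, d)]
        # 2. Generate an answer with in-line citations to the kept docs.
        answer = generator_agent(question, kept)
        # 3. Stop once the verifier judges the answer complete.
        if verifier_agent(question, answer):
            break
    return answer

# Toy agents: keep docs sharing a word with the question, cite every kept doc.
docs = ["Paris is the capital of France.", "Bananas are yellow."]
keep_if_overlap = lambda q, d: any(w in d.lower() for w in q.lower().split())
cite_all = lambda q, kept: " ".join(f"{d} [{i + 1}]" for i, d in enumerate(kept))
always_done = lambda q, a: bool(a)

result = answer_with_refinement("capital of France?", docs,
                                keep_if_overlap, cite_all, always_done)
# result cites only the relevant document: "Paris is the capital of France. [1]"
```

The `max_rounds` cap bounds the refinement loop; how RAGentA triggers and bounds refinement in practice is described in the paper, not here.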