RAGCap-Bench: Benchmarking Capabilities of LLMs in Agentic Retrieval Augmented Generation Systems

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing agentic RAG systems exhibit limited performance on complex multi-hop reasoning tasks and lack systematic evaluation of intermediate reasoning capabilities. Method: We introduce RAGCap-Bench, the first fine-grained, capability-decomposed benchmark for agentic RAG, featuring a taxonomy of intermediate tasks (e.g., planning, retrieval, reasoning) and a comprehensive error classification schema. The benchmark incorporates multi-hop question answering and error-pattern identification, grounded in outputs from state-of-the-art agentic RAG systems. Contribution/Results: Empirical analysis reveals a strong correlation between intermediate capability scores and end-to-end performance; "slow-thinking" models demonstrate superior accuracy at critical reasoning-chain junctures; top-performing models achieve significant gains on RAGCap-Bench, validating its effectiveness and diagnostic utility for identifying capability bottlenecks in agentic RAG systems.
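
The sketch below illustrates one way such a capability-decomposed evaluation could be wired up: per-capability accuracy over targeted questions, with a pluggable answering function. The question schema, field names, and grading logic are hypothetical assumptions for illustration, not the released RAGCap-Bench format.

# Minimal sketch of a capability-decomposed evaluation loop (assumed schema).
# Capability names follow the paper's taxonomy (planning, retrieval, reasoning);
# the multiple-choice format and field names are illustrative, not the benchmark's.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CapQuestion:
    capability: str      # e.g. "planning", "retrieval", "reasoning"
    prompt: str          # question grounded in an agentic-RAG intermediate output
    choices: list[str]   # candidate answers, one of which is correct
    answer_idx: int      # index of the correct choice

def evaluate_capabilities(model_answer, questions: list[CapQuestion]) -> dict[str, float]:
    """Score a model per capability. `model_answer(prompt, choices) -> int`
    is any callable returning the index of the chosen option."""
    correct, total = defaultdict(int), defaultdict(int)
    for q in questions:
        total[q.capability] += 1
        if model_answer(q.prompt, q.choices) == q.answer_idx:
            correct[q.capability] += 1
    return {cap: correct[cap] / total[cap] for cap in total}

# Example: a trivial baseline that always picks the first option.
if __name__ == "__main__":
    bench = [CapQuestion("planning", "Which sub-question should be retrieved first?",
                         ["entity A's birthplace", "entity B's employer"], 0)]
    print(evaluate_capabilities(lambda p, c: 0, bench))  # {'planning': 1.0}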

📝 Abstract
Retrieval-Augmented Generation (RAG) mitigates key limitations of Large Language Models (LLMs), such as factual errors, outdated knowledge, and hallucinations, by dynamically retrieving external information. Recent work extends this paradigm through agentic RAG systems, where LLMs act as agents to iteratively plan, retrieve, and reason over complex queries. However, these systems still struggle with challenging multi-hop questions, and their intermediate reasoning capabilities remain underexplored. To address this, we propose RAGCap-Bench, a capability-oriented benchmark for fine-grained evaluation of intermediate tasks in agentic RAG workflows. We analyze outputs from state-of-the-art systems to identify common tasks and the core capabilities required for their execution, then construct a taxonomy of typical LLM errors to design targeted evaluation questions. Experiments show that "slow-thinking" models with stronger RAGCap performance achieve better end-to-end results, underscoring the benchmark's validity and the importance of enhancing these intermediate capabilities.
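
The abstract's central validity claim is that stronger intermediate (RAGCap) scores go together with better end-to-end results. The snippet below shows one way to quantify such a relationship with a rank correlation; the per-model numbers are made-up placeholders, not reported results, and the paper's actual statistic may differ.

# Illustrative correlation check between intermediate-capability scores
# and end-to-end accuracy (hypothetical values, assumed use of Spearman's rho).
from scipy.stats import spearmanr

# Hypothetical per-model averages: mean capability accuracy vs. end-to-end multi-hop QA accuracy.
ragcap_scores  = [0.52, 0.61, 0.68, 0.74, 0.81]
end_to_end_acc = [0.31, 0.38, 0.41, 0.47, 0.55]

rho, p_value = spearmanr(ragcap_scores, end_to_end_acc)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")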
Problem

Research questions and friction points this paper is trying to address.

Agentic RAG systems still struggle with challenging multi-hop questions
Intermediate reasoning tasks in RAG workflows lack systematic, fine-grained evaluation
Common LLM error patterns in agentic RAG pipelines are not well characterized, making it hard to target improvements
Innovation

Methods, ideas, or system contributions that make the work stand out.

RAGCap-Bench: a capability-oriented benchmark for fine-grained evaluation of intermediate tasks in agentic RAG workflows
An error taxonomy, derived from outputs of state-of-the-art systems, used to design targeted evaluation questions
Evidence that "slow-thinking" models with stronger RAGCap performance achieve better end-to-end results