RAGPerf: An End-to-End Benchmarking Framework for Retrieval-Augmented Generation Systems

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the absence of standardized benchmarks for end-to-end performance and quality evaluation in retrieval-augmented generation (RAG) systems. The authors propose a modular and configurable RAG benchmarking framework that, for the first time, enables decoupled assessment of individual pipeline stages—including embedding, indexing, retrieval, reranking, and generation. The framework supports multimodal data, multiple vector databases (e.g., Milvus, Qdrant), and diverse large language models, while accommodating realistic query loads and update patterns. It automatically collects key metrics such as throughput, resource utilization, and accuracy. Experimental results demonstrate that the framework introduces negligible performance overhead while providing comprehensive evaluation of RAG system effectiveness. The implementation is publicly released as open-source software.

📝 Abstract
We present the design and implementation of RAGPerf, a benchmarking framework for characterizing the system behavior of retrieval-augmented generation (RAG) pipelines. To facilitate detailed profiling and fine-grained performance analysis, RAGPerf decouples the RAG workflow into modular components: embedding, indexing, retrieval, reranking, and generation. Users can configure the core parameters of each component and examine their impact on end-to-end query performance and quality. A built-in workload generator models real-world scenarios by supporting diverse datasets (e.g., text, PDF, code, and audio), different retrieval-to-update ratios, and varying query distributions. RAGPerf also supports different embedding models, major vector databases such as LanceDB, Milvus, Qdrant, Chroma, and Elasticsearch, and different LLMs for content generation. It automates the collection of performance metrics (end-to-end query throughput, host/GPU memory footprint, and CPU/GPU utilization) and accuracy metrics (context recall, query accuracy, and factual consistency). We demonstrate RAGPerf's capabilities through a comprehensive set of experiments and release its codebase as open source on GitHub. Our evaluation shows that RAGPerf incurs negligible performance overhead.
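The abstract's five decoupled stages, pluggable vector database, and retrieval/update ratio could be expressed as a configuration along these lines. This is a minimal illustrative sketch only; all class, field, and parameter names below are hypothetical and do not reflect RAGPerf's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class StageConfig:
    """One pipeline stage (hypothetical): embedding, indexing,
    retrieval, reranking, or generation."""
    name: str
    params: dict = field(default_factory=dict)

@dataclass
class BenchmarkConfig:
    """Top-level benchmark configuration (hypothetical)."""
    stages: list
    vector_db: str = "Milvus"          # e.g., LanceDB, Milvus, Qdrant, Chroma, Elasticsearch
    retrieval_update_ratio: float = 0.9  # fraction of operations that are retrievals vs. index updates

    def stage_names(self):
        # Order matters: stages run sequentially in the pipeline.
        return [s.name for s in self.stages]

config = BenchmarkConfig(
    stages=[
        StageConfig("embedding", {"model": "all-MiniLM-L6-v2"}),
        StageConfig("indexing", {"index_type": "HNSW"}),
        StageConfig("retrieval", {"top_k": 10}),
        StageConfig("reranking", {"model": "cross-encoder"}),
        StageConfig("generation", {"llm": "llama-3-8b"}),
    ],
)
```

Decoupling each stage into its own configurable unit is what lets a benchmark vary one component (say, the vector database or `top_k`) while holding the others fixed, isolating that component's contribution to end-to-end throughput and accuracy.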
Problem

Research questions and friction points this paper is trying to address.

Retrieval-Augmented Generation
Benchmarking
Performance Evaluation
RAG Systems
End-to-End Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

RAG Benchmarking
Modular RAG Pipeline
End-to-End Performance Analysis
Vector Database Evaluation
Retrieval-Augmented Generation