🤖 AI Summary
This work addresses the limitations of existing retrieval-augmented generation (RAG) evaluation benchmarks, which struggle to comprehensively assess critical Generator capabilities (evidence integration, multi-hop reasoning, logical judgment, table comprehension, and refusal behavior) under unified conditions. To close this gap, the authors introduce LIT-RAGBench, a holistic benchmark that covers these five core dimensions and, for the first time, enables composite evaluation across them, including question patterns that combine multiple aspects. Its human-constructed questions (Japanese originals plus a human-curated English machine translation) involve fictional entities and scenarios, ensuring that correct answers strictly depend on the retrieved documents rather than on a model's parametric knowledge. Scoring is automated with LLM-as-a-Judge, and results are reported as category-wise and overall accuracy. Experiments show that neither leading commercial APIs nor open-weight models exceed 90% overall accuracy, exposing significant deficiencies of current RAG Generators on complex tasks and providing quantitative guidance for model selection and for building RAG-specialized models.
📝 Abstract
Retrieval-Augmented Generation (RAG) is a framework in which a Generator, such as a Large Language Model (LLM), produces answers by retrieving documents from an external collection using a Retriever. In practice, Generators must integrate evidence from long contexts, perform multi-step reasoning, interpret tables, and abstain when evidence is missing. However, existing benchmarks for Generators provide limited coverage, and none enable simultaneous evaluation of multiple capabilities under unified conditions. To bridge the gap between existing evaluations and practical use, we introduce LIT-RAGBench (the Logic, Integration, Table, Reasoning, and Abstention RAG Generator Benchmark), which defines five categories: Integration, Reasoning, Logic, Table, and Abstention, each further divided into practical evaluation aspects. LIT-RAGBench systematically covers patterns combining multiple aspects across categories. Because it uses fictional entities and scenarios, LIT-RAGBench evaluates whether answers are grounded in the provided external documents. The dataset consists of 114 human-constructed Japanese questions and an English version generated by machine translation with human curation. We use LLM-as-a-Judge for scoring and report category-wise and overall accuracy. Across API-based and open-weight models, no model exceeds 90% overall accuracy. By making strengths and weaknesses measurable within each category, LIT-RAGBench serves as a practical metric for model selection in RAG deployments and for building RAG-specialized models. We release LIT-RAGBench, including the dataset and evaluation code, at https://github.com/Koki-Itai/LIT-RAGBench.
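The category-wise and overall accuracy reporting described above can be sketched as follows. This is a minimal illustration, not the benchmark's actual harness: the record schema is hypothetical, and the `judge` function stands in for a real LLM-as-a-Judge call (which would prompt an LLM with the question, reference answer, and model answer and parse a correct/incorrect verdict); here it is replaced by a simple normalized exact match.

```python
from collections import defaultdict

# Hypothetical record format; LIT-RAGBench's actual schema may differ.
ITEMS = [
    {"category": "Integration", "reference": "Aoyagi Shrine", "answer": "Aoyagi Shrine"},
    {"category": "Reasoning",   "reference": "1872",          "answer": "1875"},
    {"category": "Abstention",  "reference": "cannot answer", "answer": "Cannot answer"},
]

def judge(reference: str, answer: str) -> bool:
    """Stand-in for an LLM-as-a-Judge verdict.

    In practice, an LLM would be prompted with the question, the
    reference, and the model's answer, and asked to output a binary
    correctness verdict. Here we approximate with normalized exact match.
    """
    return reference.strip().lower() == answer.strip().lower()

def score(items):
    """Return (category-wise accuracy, overall accuracy) over judged items."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        total[item["category"]] += 1
        if judge(item["reference"], item["answer"]):
            correct[item["category"]] += 1
    per_category = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / sum(total.values())
    return per_category, overall

per_category, overall = score(ITEMS)
print(per_category)  # accuracy per category
print(overall)       # overall accuracy across all items
```

On the toy items above, two of three answers pass the judge, so overall accuracy is 2/3; swapping in a real LLM judge changes only the body of `judge`, not the aggregation.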