🤖 AI Summary
This paper addresses three critical challenges in Retrieval-Augmented Generation (RAG): large language models' (LLMs) difficulty in accurately identifying relevant background knowledge, properly attributing evidence to retrieved passages, and appropriately abstaining from answering when sufficient support is lacking. To this end, the authors introduce GaRAGe, the first large-scale, manually annotated grounding benchmark for RAG, comprising 2,366 diverse questions and over 35,000 fine-grained, paragraph-level grounding annotations across hybrid retrieval settings (private documents and web pages). They also propose novel evaluation metrics (Grounding Attribution F1, True Positive Abstention Rate, and Relevance-Aware Factuality Score) that enable systematic assessment of model robustness under temporal constraints and sparse private data. Empirical results reveal severe limitations in current LLMs: the best Relevance-Aware Factuality Score reaches only 60%, the best true positive abstention rate is 31%, and the peak Grounding Attribution F1 is 58.9%, underscoring fundamental weaknesses in evidence reliance and uncertainty calibration.
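The metrics are only named here, not formally defined. Under the assumption that attribution is scored as set overlap between the passages a model cites and the annotated relevant ones, and that abstention is a binary decision on questions lacking relevant grounding, a minimal sketch might look like:

```python
# Illustrative sketch only: the exact metric definitions are not given in this
# summary, so the set-based F1 and the binary abstention framing below are
# assumptions, not the paper's formulas.

def attribution_f1(attributed: set, relevant: set) -> float:
    """F1 between passage IDs a model attributes and the annotated relevant set."""
    if not attributed and not relevant:
        return 1.0  # nothing to cite and nothing cited: vacuously perfect
    tp = len(attributed & relevant)
    precision = tp / len(attributed) if attributed else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def true_positive_abstention_rate(deflected: list, should_deflect: list) -> float:
    """Fraction of questions with no relevant grounding on which the model deflected."""
    eligible = [d for d, s in zip(deflected, should_deflect) if s]
    return sum(eligible) / len(eligible) if eligible else 0.0
```

For example, a model that cites passages {p1, p2, p3} when {p2, p3, p4} are annotated as relevant scores an attribution F1 of 2/3; a model that deflects on only one of two unanswerable questions scores a 50% true positive abstention rate.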
📝 Abstract
We present GaRAGe, a large RAG benchmark with human-curated long-form answers and annotations of each grounding passage, allowing a fine-grained evaluation of whether LLMs can identify relevant grounding when generating RAG answers. Our benchmark contains 2366 questions of diverse complexity, dynamism, and topics, and includes over 35K annotated passages retrieved from both private document sets and the Web, to reflect real-world RAG use cases. This makes it an ideal test bed to evaluate an LLM's ability to identify only the relevant information necessary to compose a response, or provide a deflective response when there is insufficient information. Evaluations of multiple state-of-the-art LLMs on GaRAGe show that the models tend to over-summarise rather than (a) ground their answers strictly on the annotated relevant passages (reaching at most a Relevance-Aware Factuality Score of 60%), or (b) deflect when no relevant grounding is available (reaching at most 31% true positive rate in deflections). The F1 in attribution to relevant sources is at most 58.9%, and we show that performance is particularly reduced when answering time-sensitive questions and when having to draw knowledge from sparser private grounding sources.