🤖 AI Summary
This study addresses a critical flaw in current retrieval-augmented generation (RAG) evaluation practices: LLM judges can inflate performance metrics when evaluation information leaks to the system under test, for example through prompt templates or gold-standard answer nuggets, so that overfitting to the metric is mistaken for genuine improvement. By constructing controlled information-leakage scenarios, the authors run comparative experiments on representative nugget-based evaluation frameworks, including Ginger and Crucible, and demonstrate for the first time that RAG systems can exploit such evaluation secrets to achieve artificially near-perfect scores. A modified Crucible system substantially outperforms strong baselines like GPT-Researcher under leakage conditions, exposing the fragility of prevailing evaluation paradigms and underscoring the need for blind evaluation and methodological diversity when assessing true system performance.
📝 Abstract
RAG systems are increasingly evaluated and optimized using LLM judges, an approach that is rapidly becoming the dominant paradigm for system assessment. Nugget-based approaches in particular are now embedded not only in evaluation frameworks but also in the architectures of RAG systems themselves. While this integration can lead to genuine improvements, it also creates a risk of faulty measurements due to circularity. In this paper, we investigate this risk through comparative experiments with nugget-based RAG systems, including Ginger and Crucible, against strong baselines such as GPT-Researcher. By deliberately modifying Crucible to generate outputs optimized for an LLM judge, we show that near-perfect evaluation scores can be achieved when elements of the evaluation, such as prompt templates or gold nuggets, are leaked or can be predicted. Our results highlight the importance of blind evaluation settings and methodological diversity to guard against mistaking metric overfitting for genuine system progress.
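The leakage effect described above can be sketched with a toy nugget-recall scorer. This is a hypothetical illustration, not the paper's actual code: real nugget-based frameworks use an LLM judge to decide whether an answer covers each gold nugget, which is simplified here to substring matching. The function name, the example nuggets, and both answers are invented for this sketch.

```python
# Toy sketch of nugget-recall scoring and why leaked gold nuggets inflate it.
# All names and data here are hypothetical; real frameworks (e.g. nugget-based
# LLM-judge pipelines) replace the substring check with an LLM matching step.

def nugget_recall(answer: str, gold_nuggets: list[str]) -> float:
    """Fraction of gold nuggets whose text appears in the answer."""
    if not gold_nuggets:
        return 0.0
    hits = sum(1 for n in gold_nuggets if n.lower() in answer.lower())
    return hits / len(gold_nuggets)

gold = [
    "Paris is the capital of France",
    "its population is about 2.1 million",
]

# A system answering honestly from retrieved documents covers some nuggets.
honest_answer = "Paris is the capital of France."

# A system that has seen the gold nuggets can simply echo them back,
# scoring perfectly without any genuine retrieval or generation quality.
leaky_answer = " ".join(gold)

print(nugget_recall(honest_answer, gold))  # 0.5
print(nugget_recall(leaky_answer, gold))   # 1.0 -- near-perfect by construction
```

The point of the sketch is that the metric rewards coverage of the gold nuggets regardless of how that coverage was produced; once the nuggets (or the judge's prompt template) are visible to the system, a near-perfect score says nothing about answer quality, which is why the paper argues for blind evaluation settings.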