🤖 AI Summary
Existing RAG evaluation benchmarks overlook the interaction between retrieval difficulty and reasoning depth, failing to capture real-world challenges in multi-hop reasoning and structural complexity. To address this, we propose GRADE—a novel framework that decouples retrieval difficulty from reasoning depth, formalizing a two-dimensional difficulty matrix parameterized by semantic distance and reasoning depth. GRADE synthesizes controllable-difficulty multi-hop QA datasets via knowledge graph extraction, semantic clustering-based completion, and multi-hop path generation. It further introduces dual-dimensional difficulty quantification—comprising generative and retrieval-oriented metrics—to enable fine-grained, interpretable performance diagnostics. Experiments demonstrate that our difficulty metrics strongly correlate with system error rates and significantly enhance cross-domain and cross-model discriminability and root-cause analysis capability for RAG systems.
📝 Abstract
Retrieval-Augmented Generation (RAG) systems are widely adopted in knowledge-intensive NLP tasks, but current evaluations often overlook the structural complexity and multi-step reasoning required in real-world scenarios, including key factors such as the interaction between retrieval difficulty and reasoning depth. To address this gap, we propose GRADE, a novel evaluation framework that models task difficulty along two orthogonal dimensions: (1) reasoning depth, defined by the number of inference steps (hops), and (2) semantic distance between the query and its supporting evidence. We construct a synthetic multi-hop QA dataset from factual news articles by extracting knowledge graphs and augmenting them through semantic clustering to recover missing links, allowing us to generate diverse, difficulty-controlled queries. Central to our framework is a 2D difficulty matrix that combines generator-side and retriever-side difficulty. Experiments across multiple domains and models show that error rates correlate strongly with our difficulty measures, validating their diagnostic utility. GRADE enables fine-grained analysis of RAG performance and provides a scalable foundation for evaluating and improving multi-hop reasoning in real-world applications.
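The two-dimensional difficulty matrix described above can be sketched in code. The following is a minimal illustration only: the embedding vectors, the use of mean cosine distance as the semantic-distance measure, and the bin edges are all assumptions for demonstration, not the paper's exact formulation.

```python
import numpy as np

def semantic_distance(q_emb, ev_embs):
    """Mean cosine distance between a query embedding and its
    supporting-evidence embeddings (one plausible instantiation
    of 'semantic distance'; the paper's metric may differ)."""
    q = q_emb / np.linalg.norm(q_emb)
    ev = ev_embs / np.linalg.norm(ev_embs, axis=1, keepdims=True)
    return float(np.mean(1.0 - ev @ q))

def difficulty_cell(hops, dist, dist_edges=(0.2, 0.4, 0.6)):
    """Map a query to a cell of the 2D difficulty matrix:
    row = reasoning depth (hop count),
    column = binned retrieval-side semantic distance.
    The bin edges here are illustrative placeholders."""
    col = int(np.digitize(dist, dist_edges))
    return (hops, col)

# Toy example: a 2-hop query with two supporting evidence passages.
rng = np.random.default_rng(0)
q = rng.normal(size=8)            # stand-in query embedding
ev = rng.normal(size=(2, 8))      # stand-in evidence embeddings
d = semantic_distance(q, ev)
cell = difficulty_cell(hops=2, dist=d)
```

Each cell of the matrix then groups queries of comparable retriever-side and generator-side difficulty, so per-cell error rates can be compared across models and domains.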