🤖 AI Summary
Retrieval-augmented generation (RAG) lacks a theoretical characterization of its finite-sample generalization performance in contextual linear regression.
Method: We model retrieved examples as query-dependent noisy in-context instances, unifying in-context learning (ICL) and RAG under a single framework. This first unified theoretical treatment of RAG distinguishes uniform from non-uniform retrieval noise, and combines statistical learning theory, bias-variance decomposition, and contextual linear regression analysis to cover both in-distribution and out-of-distribution retrieval corpora.
Contribution/Results: We derive the first finite-sample generalization error upper bound for RAG and prove the existence of an intrinsic generalization-error ceiling. Experiments on Natural Questions and TriviaQA empirically validate the theoretically predicted sample-efficiency gap between ICL and RAG. This work fills a fundamental theoretical gap in RAG’s finite-sample generalization analysis and provides an interpretable, theoretically grounded error bound.
📝 Abstract
Retrieval-augmented generation (RAG) has seen many empirical successes in recent years by aiding LLMs with external knowledge. However, its theoretical aspects remain largely unexplored. In this paper, we propose the first finite-sample generalization bound for RAG in in-context linear regression and derive an exact bias-variance tradeoff. Our framework views the retrieved texts as query-dependent noisy in-context examples and recovers classical in-context learning (ICL) and standard RAG as limit cases. Our analysis suggests that, unlike ICL, RAG faces an intrinsic ceiling on its generalization error. Furthermore, our framework can model retrieval both from the training data and from external corpora by introducing uniform and non-uniform RAG noise. In line with our theory, we empirically demonstrate the sample efficiency of ICL and RAG with experiments on common QA benchmarks such as Natural Questions and TriviaQA.