🤖 AI Summary
This work addresses the poorly understood trade-off between parametric knowledge acquired through pretraining and non-parametric knowledge accessed via retrieval under a fixed data budget. By systematically varying model scale (30M–3B parameters), pretraining corpus size (up to 100B tokens), and retrieval corpus size, the study proposes the first unified three-dimensional scaling manifold that jointly models these factors. Using OLMo-2-based models evaluated across diverse tasks, including reasoning, scientific question answering, and open-domain QA, the study quantifies how the marginal utility of retrieval depends on model size, task type, and the degree of pretraining saturation. Results demonstrate that retrieval consistently enhances performance across all model scales, and the scaling manifold provides empirical guidance for allocating a fixed data budget between pretraining and retrieval.
📝 Abstract
Retrieval-augmented generation (RAG) improves language model (LM) performance on knowledge-intensive tasks by providing relevant context at test time. However, the relationship between parametric knowledge acquired during pretraining and non-parametric knowledge accessed via retrieval remains poorly understood, especially under fixed data budgets. In this work, we systematically study the trade-off between pretraining corpus size and retrieval store size across a wide range of model and data scales. We train OLMo-2-based LMs ranging from 30M to 3B parameters on up to 100B tokens of DCLM data, while varying both pretraining data scale (1–150× the number of parameters) and retrieval store size (1–20×), and evaluate performance across a diverse suite of benchmarks spanning reasoning, scientific QA, and open-domain QA. We find that retrieval consistently improves performance over parametric-only baselines across model scales, and we introduce a three-dimensional scaling framework that models performance as a function of model size, pretraining tokens, and retrieval corpus size. This scaling manifold enables us to estimate optimal allocations of a fixed data budget between pretraining and retrieval, revealing that the marginal utility of retrieval depends strongly on model scale, task type, and the degree of pretraining saturation. Our results provide a quantitative foundation for understanding when and how retrieval should complement pretraining, offering practical guidance for allocating data resources in the design of scalable language modeling systems.
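To make the three-dimensional scaling idea concrete, here is a minimal sketch of what "performance as a function of model size, pretraining tokens, and retrieval corpus size" can look like. The abstract does not give the functional form or any fitted coefficients; this sketch assumes a common additive saturating power law, and every constant (`a`, `b`, `c`, `d`, `alpha`, `beta`, `gamma`) is an arbitrary illustrative value, not a result from the paper.

```python
# Hypothetical 3D scaling surface: score(N, D, R) for model parameters N,
# pretraining tokens D, and retrieval corpus tokens R. Functional form and
# all coefficients are illustrative assumptions, not the paper's fit.

def predicted_score(n_params, pretrain_tokens, retrieval_tokens,
                    a=0.85, b=2.0, c=1.5, d=0.6,
                    alpha=0.28, beta=0.30, gamma=0.25):
    """Score saturates toward `a` as each axis grows; each power-law term
    is a deficit that shrinks as its corresponding resource scales up."""
    deficit = (b * n_params ** -alpha
               + c * pretrain_tokens ** -beta
               + d * (retrieval_tokens + 1) ** -gamma)  # +1 keeps R=0 (no retrieval) defined
    return a - deficit

# Marginal utility of adding a retrieval store at two model scales
# (all numbers illustrative):
small_no_rag = predicted_score(30e6, 3e9, 0)
small_rag = predicted_score(30e6, 3e9, 60e9)
large_no_rag = predicted_score(3e9, 100e9, 0)
large_rag = predicted_score(3e9, 100e9, 60e9)
print(f"30M model: {small_no_rag:.3f} -> {small_rag:.3f}")
print(f"3B model:  {large_no_rag:.3f} -> {large_rag:.3f}")
```

Under a form like this, the optimal split of a fixed token budget between pretraining data and the retrieval store follows from comparing the partial derivatives of the deficit terms, which is the kind of allocation question the scaling manifold is designed to answer.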