🤖 AI Summary
The impending exhaustion of high-quality internet text threatens to impose a fundamental data bottleneck on large language model (LLM) training.
Method: We systematically investigate language model scaling under data constraints, running large-scale controlled experiments (up to 900B training tokens and 9B-parameter models) complemented by analyses of code-data augmentation, deduplication, and filtering.
Contribution/Results: We propose and empirically validate a scaling law for compute optimality that captures the diminishing returns of repeated tokens and excess parameters. Our experiments show that up to four epochs of data repetition incur negligible loss degradation relative to unique data, whereas beyond that point the value of additional compute decays toward zero. Furthermore, mixing in code data and relaxing common filtering criteria meaningfully alleviate data scarcity. To foster reproducibility and advance data-efficient scaling, we open-source all 400 trained models and the corresponding datasets, providing both theoretical grounding and practical guidance for sustainable LLM development.
📝 Abstract
The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations.
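The abstract describes a scaling law in which repeated tokens contribute progressively less than unique ones, saturating after enough epochs. A minimal sketch of that kind of diminishing-returns law is below, using an exponential-saturation form for the "effective" token count; the functional form and the constant `r_star` here are illustrative assumptions for exposition, not fitted values from the paper.

```python
import math

def effective_data(unique_tokens: float, repetitions: float,
                   r_star: float = 15.0) -> float:
    """Loss-equivalent token count after repeating a fixed unique dataset.

    Assumed exponential-saturation form: each extra epoch contributes less
    new signal, saturating after roughly r_star repetitions. r_star = 15.0
    is a hypothetical constant, not a value from the paper.
    """
    return unique_tokens * (1.0 + r_star * (1.0 - math.exp(-repetitions / r_star)))

# Compare effective tokens against raw tokens seen (unique * epochs):
# a few repetitions are nearly as valuable as fresh data, while heavy
# repetition yields almost no additional effective data.
u = 100e9  # hypothetical budget of 100B unique tokens
for reps in [0, 4, 16, 64]:
    raw_tokens = u * (1 + reps)
    ratio = effective_data(u, reps) / raw_tokens
    print(f"repetitions={reps:3d}  value per raw token ≈ {ratio:.2f}")
```

Under these assumed constants, the per-token value stays above ~0.9 at four repetitions but falls below ~0.25 by 64, mirroring the qualitative finding that moderate repetition is nearly free while excessive repetition wastes compute.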