AI Summary
As compute capacity grows, the limited supply of high-quality pretraining data has become a critical bottleneck. Method: This paper systematically investigates how well small, filtered, and deduplicated datasets support model training across a range of compute budgets. We propose a document-level differential resampling paradigm: (i) dynamically assigning a repetition count to each document based on its quality score; (ii) jointly tuning the learning rate, batch size, and number of training epochs; and (iii) reweighting the data under a fixed token budget. Contribution/Results: Experiments reveal, for the first time, that training on a strongly filtered dataset for 10 epochs significantly outperforms single-epoch training on an unfiltered dataset ten times larger. Under identical token budgets, this approach substantially improves model performance, empirically showing that non-uniform data quality alters effective scaling behavior. This work establishes a scalable, quality-centric paradigm for efficiently using compact, high-fidelity pretraining corpora.
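To make the resampling idea concrete, here is a minimal sketch of document-level differential resampling under a token budget. The helper names (`Document`, `quality_score`, `repeat_count`) and the linear score-to-repeats mapping are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: repeat higher-quality documents more often,
# then trim the resampled corpus to a fixed token budget.
from dataclasses import dataclass
import math
import random

@dataclass
class Document:
    text: str
    num_tokens: int
    quality_score: float  # e.g. from a quality classifier, assumed in [0, 1]

def repeat_count(doc: Document, max_repeats: int = 10) -> int:
    """Map a quality score to an integer repetition count (assumed linear ramp)."""
    return max(1, math.ceil(doc.quality_score * max_repeats))

def build_resampled_corpus(docs: list[Document], token_budget: int, seed: int = 0) -> list[Document]:
    """Expand each document by its repeat count, shuffle, and cut to the token budget."""
    resampled = [d for d in docs for _ in range(repeat_count(d))]
    random.Random(seed).shuffle(resampled)  # avoid clustering repeats of the same document
    corpus, used = [], 0
    for d in resampled:
        if used + d.num_tokens > token_budget:
            break
        corpus.append(d)
        used += d.num_tokens
    return corpus
```

In this sketch, the token budget is the binding constraint: raising a document's quality score increases how often it appears, at the expense of lower-scoring documents that get crowded out of the budget.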
Abstract
Data filtering has become a powerful tool for improving model performance while reducing computational cost. However, as large language model compute budgets continue to grow, the limited data volume provided by heavily filtered and deduplicated datasets will become a practical constraint. In an effort to better understand how to proceed, we study model performance at various compute budgets and across multiple pre-training datasets created through data filtering and deduplication. We find that, given appropriate modifications to the training recipe, repeating existing aggressively filtered datasets for up to ten epochs can outperform training on the ten-times-larger superset for a single epoch, across compute budgets spanning multiple orders of magnitude. While this finding relies on repeating the dataset for many epochs, we also investigate repeats within these datasets at the document level. We find that not all documents within a dataset are equal, and that we can create better datasets relative to a token budget by explicitly manipulating the counts of individual documents. We conclude by arguing that even as large language models scale, data filtering remains an important direction of research.
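As a back-of-the-envelope check on the comparison above, the snippet below works through the token accounting: ten epochs over the filtered subset and one epoch over the ten-times-larger superset expose the model to the same number of training tokens, so the two runs differ only in how often each document is repeated. The dataset size used here is a placeholder, not a figure from the paper.

```python
# Illustrative token accounting for "10 epochs on filtered" vs "1 epoch on 10x superset".
filtered_tokens = 50e9                   # tokens in the filtered subset (assumed placeholder)
superset_tokens = 10 * filtered_tokens   # the unfiltered superset is ten times larger

tokens_seen_filtered = 10 * filtered_tokens  # ten epochs on the filtered data
tokens_seen_superset = 1 * superset_tokens   # one epoch on the superset

assert tokens_seen_filtered == tokens_seen_superset  # identical training-token budgets
print(f"Both runs see {tokens_seen_filtered:.2e} training tokens.")
```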