🤖 AI Summary
To address the high computational cost of pretraining data selection—typically requiring expensive retraining—the paper proposes a zero-training-cost data filtering framework. Methodologically, it builds a statistical model of the correlation between perplexity and downstream task performance; leverages lightweight evaluation across 90 open-source LLMs on text spanning tens of thousands of web domains; and selects high-quality pretraining data via this correlation-driven criterion. Crucially, the approach eliminates all large-model retraining, drastically reducing computational overhead. In experiments at the 160M-parameter scale, the framework outperforms DSIR on all eight downstream benchmarks and matches the performance of DataComp-LM's hand-engineered bigram classifier, the best data selector found in that work. These results empirically validate perplexity–performance correlation as an effective, generalizable proxy for pretraining data quality.
📝 Abstract
Quality pretraining data is often seen as the key to high-performance language models. However, progress in understanding pretraining data has been slow due to the costly pretraining runs required for data selection experiments. We present a framework that avoids these costs and selects high-quality pretraining data without any LLM training of our own. Our work is based on a simple observation: LLM losses on many pretraining texts are correlated with downstream benchmark performance, and selecting high-correlation documents is an effective pretraining data selection method. We build a new statistical framework for data selection centered around estimates of perplexity-benchmark correlations and perform data selection using a sample of 90 LLMs taken from the Open LLM Leaderboard on texts from tens of thousands of web domains. In controlled pretraining experiments at the 160M parameter scale on 8 benchmarks, our approach outperforms DSIR on every benchmark, while matching the best data selector found in DataComp-LM, a hand-engineered bigram classifier.
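The core idea of the abstract—correlating per-domain LLM losses with benchmark scores across many models, then keeping the domains whose losses best predict performance—can be sketched as follows. This is a minimal illustration, not the paper's actual estimator: the array shapes, the use of a Spearman-style rank correlation, and the top-`k` cutoff are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: rows = LLMs, columns = web domains.
n_models, n_domains = 90, 1000
log_losses = rng.normal(size=(n_models, n_domains))  # per-model, per-domain log-loss
bench_scores = rng.normal(size=n_models)             # per-model benchmark score


def rank(x: np.ndarray, axis: int = 0) -> np.ndarray:
    """Rank-transform along an axis (ties broken arbitrarily, for simplicity)."""
    return np.argsort(np.argsort(x, axis=axis), axis=axis).astype(float)


# Spearman-style correlation between each domain's losses and benchmark scores:
# rank both, center, then take the normalized dot product per domain.
r_loss = rank(log_losses, axis=0) - rank(log_losses, axis=0).mean(axis=0)
r_bench = rank(bench_scores) - rank(bench_scores).mean()
corr = (r_loss * r_bench[:, None]).sum(axis=0) / (
    np.linalg.norm(r_loss, axis=0) * np.linalg.norm(r_bench)
)

# Lower loss should go with higher benchmark scores, so the most useful
# domains are those with the most *negative* loss-benchmark correlation.
k = 100
selected_domains = np.argsort(corr)[:k]
```

In practice the loss matrix would come from scoring real web-domain text with each leaderboard model, and `bench_scores` from the models' published benchmark results; the selected domains then define the pretraining data filter.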