Improving Pretraining Data Using Perplexity Correlations

📅 2024-09-09
🏛️ arXiv.org
📈 Citations: 9
Influential: 1
🤖 AI Summary
To address the high computational cost of pretraining data selection—which typically requires expensive retraining for every candidate dataset—the paper proposes a data filtering framework that requires no LLM training of its own. Methodologically, it builds a statistical framework around correlations between model perplexity and downstream task performance; it estimates these correlations from 90 open-source LLMs evaluated on text spanning tens of thousands of web domains; and it selects high-quality pretraining data via this correlation-driven criterion. Crucially, the approach eliminates all large-model retraining, drastically reducing computational overhead. In controlled experiments at the 160M-parameter scale, the framework outperforms DSIR on all eight downstream benchmarks and matches the performance of DataComp-LM's best data selector, a hand-engineered bigram classifier. These results empirically validate perplexity–benchmark correlation as an effective, generalizable proxy for pretraining data quality.

📝 Abstract
Quality pretraining data is often seen as the key to high-performance language models. However, progress in understanding pretraining data has been slow due to the costly pretraining runs required for data selection experiments. We present a framework that avoids these costs and selects high-quality pretraining data without any LLM training of our own. Our work is based on a simple observation: LLM losses on many pretraining texts are correlated with downstream benchmark performance, and selecting high-correlation documents is an effective pretraining data selection method. We build a new statistical framework for data selection centered around estimates of perplexity-benchmark correlations and perform data selection using a sample of 90 LLMs taken from the Open LLM Leaderboard on texts from tens of thousands of web domains. In controlled pretraining experiments at the 160M parameter scale on 8 benchmarks, our approach outperforms DSIR on every benchmark, while matching the best data selector found in DataComp-LM, a hand-engineered bigram classifier.
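The core idea above can be sketched numerically: given per-model, per-domain log-likelihoods and per-model benchmark scores, estimate a correlation for each domain and keep the domains where better language modeling tracks better benchmark performance. The sketch below uses a Spearman-style rank correlation on synthetic data; the array names, shapes, and the top-10% cutoff are illustrative assumptions, not the paper's actual estimator or threshold.

```python
import numpy as np

# Hypothetical inputs (illustrative, not from the paper's code):
#   loglik[i, j] = mean per-token log-likelihood of model i on domain j
#   bench[i]     = aggregate downstream benchmark score of model i
rng = np.random.default_rng(0)
n_models, n_domains = 90, 1000
loglik = rng.normal(size=(n_models, n_domains))
bench = rng.normal(size=n_models)

def rank(x):
    # Convert values to 0..n-1 ranks along axis 0 (ties broken arbitrarily),
    # giving a Spearman-style, outlier-robust correlation.
    return np.argsort(np.argsort(x, axis=0), axis=0).astype(float)

def perplexity_benchmark_corr(loglik, bench):
    """Per-domain rank correlation between log-likelihoods and benchmark scores."""
    r_ll = rank(loglik)                 # (n_models, n_domains)
    r_b = rank(bench[:, None]).ravel()  # (n_models,)
    # Standardize ranks, then average the products: Pearson correlation of ranks.
    r_ll = (r_ll - r_ll.mean(axis=0)) / (r_ll.std(axis=0) + 1e-12)
    r_b = (r_b - r_b.mean()) / (r_b.std() + 1e-12)
    return (r_ll * r_b[:, None]).mean(axis=0)  # (n_domains,)

corr = perplexity_benchmark_corr(loglik, bench)
# Keep the domains whose losses correlate most with benchmark performance
# (top 10% here is an arbitrary illustrative cutoff).
keep = np.argsort(corr)[::-1][: n_domains // 10]
```

In this framing, pretraining data selection reduces to ranking domains by their estimated correlation and filtering the corpus to the highest-ranked ones, with no model training required at selection time.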
Problem

Research questions and friction points this paper is trying to address.

Pretraining data selection experiments normally require costly pretraining runs for each candidate dataset.
As a result, progress in understanding what makes pretraining data high-quality has been slow.
A selection criterion is needed that identifies high-quality data without training any LLMs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

A statistical framework for data selection built on estimates of perplexity-benchmark correlations
Correlations estimated from 90 Open LLM Leaderboard models on texts from tens of thousands of web domains, with no LLM training of the authors' own
At the 160M-parameter scale, outperforms DSIR on all 8 benchmarks and matches DataComp-LM's hand-engineered bigram classifier