AI Summary
Existing perplexity (PPL)-based data filtering methods for large language model pretraining suffer from high computational overhead and poor robustness to noise and out-of-distribution samples. To address these limitations, this paper proposes a model-free token prior probability filtering method. It models lexical density and token role characteristics via corpus-level token frequency statistics, and applies mean and standard deviation thresholds combined with linguistically motivated heuristics to enable efficient, stable, and inference-free document selection. Compared to PPL-based approaches, the method achieves an over 1000× speedup while attaining state-of-the-art average performance across 20 downstream tasks. Moreover, it generalizes well to code, mathematical notation, and multilingual text. The proposed approach significantly improves the efficiency, robustness, and applicability of data curation for LLM pretraining.
Abstract
As large language models (LLMs) are pretrained on massive web corpora, careful data selection becomes essential for effective and efficient learning. While perplexity (PPL)-based filtering has shown strong performance, it suffers from two drawbacks: substantial time cost and the model's inherent unreliability when handling noisy or out-of-distribution samples. In this work, we propose a simple yet powerful alternative: a prior-based data filtering method that estimates token priors from corpus-level term frequency statistics, inspired by linguistic insights on word roles and lexical density. Our approach filters documents based on the mean and standard deviation of their token priors, serving as a fast proxy for PPL while requiring no model inference. Despite its simplicity, the prior-based filter achieves the highest average performance across 20 downstream benchmarks while reducing time cost by over 1000× compared to PPL-based filtering. We further demonstrate its applicability to symbolic languages such as code and math, and its dynamic adaptability to multilingual corpora without supervision.
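The core idea described above, scoring each document by the mean and standard deviation of corpus-level token priors and keeping documents whose scores fall inside threshold bands, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the use of log-priors, the tokenization, and the threshold ranges (`mean_range`, `std_range`) are all assumptions chosen for clarity; the paper additionally uses linguistically motivated heuristics not shown here.

```python
from collections import Counter
import math

def token_priors(corpus):
    """Estimate the prior p(t) of each token from corpus-level term frequency.

    corpus: list of documents, each a list of token strings.
    """
    counts = Counter(tok for doc in corpus for tok in doc)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def doc_stats(doc, priors):
    """Mean and standard deviation of log-priors over a document's tokens.

    Using log-priors is an assumption; it keeps the statistics on the same
    scale as log-likelihood-based scores such as PPL.
    """
    vals = [math.log(priors[tok]) for tok in doc if tok in priors]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, math.sqrt(var)

def filter_docs(corpus, mean_range, std_range):
    """Keep documents whose prior statistics fall inside both threshold bands.

    No model inference is involved: one counting pass over the corpus,
    then one cheap scoring pass per document.
    """
    priors = token_priors(corpus)
    kept = []
    for doc in corpus:
        m, s = doc_stats(doc, priors)
        if mean_range[0] <= m <= mean_range[1] and std_range[0] <= s <= std_range[1]:
            kept.append(doc)
    return kept
```

Because scoring reduces to dictionary lookups and simple arithmetic, the per-document cost is linear in document length with a tiny constant, which is the source of the large speedup over running an LLM forward pass to compute PPL.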