🤖 AI Summary
Existing pre-training data filtering relies on heuristic rules and lacks a systematic quality-evaluation framework. This work proposes DataMan, a framework that uses reverse prompt engineering to guide LLMs in autonomously identifying 14 data quality criteria and 15 domain labels, establishing the first fine-grained, interpretable joint quality-and-domain annotation schema. It grounds quality modeling in a perplexity (PPL) anomaly attribution method, uncovering two critical phenomena: the weak correlation between individual quality criteria and PPL, and the misalignment between quality and in-context learning (ICL) performance. It further proposes an LLM self-reflective evaluation paradigm. Applied to a 447B-token corpus, DataMan enables full-scale annotation; training a 1.3B-parameter model on only 30B high-quality tokens yields state-of-the-art performance across ICL accuracy, PPL, and instruction-following capability, surpassing a uniform-sampling baseline trained on 50% more data.
📝 Abstract
The performance emergence of large language models (LLMs) driven by data scaling laws makes the selection of pre-training data increasingly important. However, existing methods rely on limited heuristics and human intuition, lacking comprehensive and clear guidelines. To address this, we are inspired by "reverse thinking" -- prompting LLMs to self-identify which criteria benefit their performance. Since a model's pre-training capability is reflected in perplexity (PPL), we derive 14 quality criteria from the causes of text perplexity anomalies and introduce 15 common application domains to support domain mixing. In this paper, we train a Data Manager (DataMan) to learn quality rating and domain recognition from pointwise rating, and use it to annotate a 447B-token pre-training corpus with 14 quality ratings and a domain type. Our experiments validate our approach: using DataMan to select 30B tokens to train a 1.3B-parameter language model yields significant improvements in in-context learning (ICL), perplexity, and instruction-following ability over the state-of-the-art baseline. The best-performing model, trained on data with Overall Score l=5, surpasses a model trained with 50% more data using uniform sampling. We continue pre-training with high-rated, domain-specific data annotated by DataMan to enhance domain-specific ICL performance, thereby verifying DataMan's domain-mixing ability. Our findings emphasize the importance of quality ranking and the complementary nature of the quality criteria, and note their low correlation with perplexity; we also analyze the misalignment between PPL and ICL performance. Finally, we thoroughly analyze our pre-training dataset, examining its composition, the distribution of quality ratings, and the original document sources.
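The selection step described above can be sketched in a few lines: given documents annotated with an overall quality rating (1-5), keep the highest-rated ones until a token budget is filled. This is a minimal illustrative sketch, not the paper's implementation; the `Doc` class, `select_top_quality` function, and the single `overall_score` field (standing in for all 14 criteria) are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    n_tokens: int
    domain: str          # one of the 15 domain labels
    overall_score: int   # DataMan-style overall quality rating, 1 (worst) to 5 (best)

def select_top_quality(docs, token_budget):
    """Greedily keep the highest-rated documents until the token budget is filled."""
    selected, used = [], 0
    for doc in sorted(docs, key=lambda d: d.overall_score, reverse=True):
        if used + doc.n_tokens > token_budget:
            continue  # skip documents that would exceed the budget
        selected.append(doc)
        used += doc.n_tokens
    return selected

# Toy corpus: only the two score-5 documents fit a 20-token budget.
corpus = [
    Doc("a", 10, "science", 5),
    Doc("b", 10, "news", 3),
    Doc("c", 10, "code", 5),
    Doc("d", 10, "forum", 1),
]
picked = select_top_quality(corpus, token_budget=20)
print([d.text for d in picked])  # ['a', 'c']
```

A real pipeline would additionally balance the selection across the 15 domains (the paper's domain-mixing step) rather than rank by a single score alone.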