🤖 AI Summary
This paper addresses the lack of a formal, quantitative definition of “variability” in pretraining data quality by introducing a principled metric, the Diversity Coefficient, grounded in a latent-concept model of natural language data. The method combines information-theoretic principles, latent concept estimation, and controlled interventional experiments, and is validated across models ranging from 51M to 7B parameters. Contributions include: (1) the first formalization of data variability as a computable, empirically verifiable data quality metric; (2) causal empirical evidence linking the Diversity Coefficient to downstream task performance, moving beyond heuristic or purely correlational evaluation; and (3) validation across 44 models, including GPT-2 and LLaMAv2, demonstrating the metric's predictive power for task performance and showing that major open-source pretraining datasets exhibit high formal diversity.
📝 Abstract
Current trends in pre-training Large Language Models (LLMs) primarily focus on the scaling of model and dataset size. While the quality of pre-training data is considered an important factor for training powerful LLMs, it remains a nebulous concept that has not been rigorously characterized. To this end, we propose a formalization of one key aspect of data quality -- measuring the variability of natural language data -- specifically via a measure we call the diversity coefficient. Our empirical analysis shows that the proposed diversity coefficient aligns with the intuitive properties of diversity and variability, e.g., it increases as the number of latent concepts increases. Then, we measure the diversity coefficient of publicly available pre-training datasets and demonstrate that their formal diversity is high compared to theoretical lower and upper bounds. Finally, we conduct a comprehensive set of controlled interventional experiments with GPT-2 and LLaMAv2 that demonstrate the diversity coefficient of pre-training data characterizes useful aspects of downstream model evaluation performance -- totaling 44 models of various sizes (51M to 7B parameters). We conclude that our formal notion of diversity is an important aspect of data quality that captures variability and causally leads to improved evaluation performance.
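To make the general idea concrete, below is a minimal sketch of how a diversity coefficient of this kind could be estimated: sample many batches from a corpus, embed each batch, and average the pairwise distances between batch embeddings. This is an illustrative assumption, not the paper's actual procedure; in particular, the hashed bag-of-words embedding, the sampling parameters, and the names `embed_batch` and `diversity_coefficient` are stand-ins introduced here for the sketch.

```python
import itertools
import random
import numpy as np

def embed_batch(batch, dim=512):
    """Stand-in batch embedding: hashed bag-of-words counts, L2-normalized.
    (A placeholder for whatever batch representation the paper actually uses.)"""
    vec = np.zeros(dim)
    for text in batch:
        for token in text.split():
            vec[hash(token) % dim] += 1.0  # hash() is stable within a single run
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def cosine_distance(u, v):
    # u and v are already unit-normalized, so cosine distance is 1 - dot product.
    return 1.0 - float(np.dot(u, v))

def diversity_coefficient(corpus, num_batches=50, batch_size=32, seed=0):
    """Estimate diversity as the expected pairwise distance between
    embeddings of randomly sampled batches of documents."""
    rng = random.Random(seed)
    batches = [rng.sample(corpus, min(batch_size, len(corpus)))
               for _ in range(num_batches)]
    embs = [embed_batch(b) for b in batches]
    dists = [cosine_distance(u, v) for u, v in itertools.combinations(embs, 2)]
    return float(np.mean(dists))

if __name__ == "__main__":
    # Toy corpora: one with a single "latent concept", one mixing several.
    narrow = ["the cat sat on the mat"] * 200
    mixed = (["the cat sat on the mat"] * 50
             + ["stock prices rose sharply today"] * 50
             + ["the theorem follows by induction"] * 50
             + ["preheat the oven to 200 degrees"] * 50)
    print("narrow corpus:", diversity_coefficient(narrow))
    print("mixed corpus: ", diversity_coefficient(mixed))
```

On this toy example the single-concept corpus scores near zero while the mixed corpus scores higher, which mirrors the intuitive property the abstract describes: the coefficient should increase as the number of latent concepts in the data increases.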