Data Mixing Laws: Optimizing Data Mixtures by Predicting Language Modeling Performance

📅 2024-03-25
🏛️ arXiv.org
📈 Citations: 43
Influential: 4
🤖 AI Summary
This study addresses the challenge of optimizing data-domain mixing ratios during large language model (LLM) pretraining. Methodologically, it introduces the “data mixing law”, a generalizable, quantitative functional relationship linking domain mixture proportions to model performance, and nests it within the established scaling laws of training steps and model sizes, yielding a joint prediction pipeline that extrapolates large-scale outcomes from small-scale runs. The approach combines lightweight proxy training, fitting the law on sampled mixtures, and extrapolation to unseen mixtures before any full-scale run. Evaluated on the RedPajama dataset, a 1B-parameter model trained on 100B tokens with the optimized mixture matches the performance of one trained for 48% more steps on the default mixture. Applied to continual pretraining, the law further predicts the critical mixture proportion that avoids catastrophic forgetting and points toward dynamic data schedules. Crucially, the framework enables accurate performance prediction for large models under arbitrary domain mixtures at minimal experimental cost.
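The "nested scaling laws" step described above can be illustrated with the training-step component alone: fit a step scaling law of the form L(S) = E + B·S^(−β) on the early steps of a small proxy run, then extrapolate to the target step count. The functional form is a standard loss-vs-steps power law; the step counts, loss values, and parameters below are synthetic, not the paper's actual fits.

```python
import numpy as np
from scipy.optimize import curve_fit

# Step scaling law L(S) = E + B * S**(-beta). In the nested scheme, each
# proxy mixture's loss would first be extrapolated to the target step
# count like this, before fitting the mixing law across mixtures.
def step_law(S, E, B, beta):
    return E + B * np.power(S, -beta)

# Early-training checkpoints of a hypothetical proxy run. The losses are
# generated from known parameters (E=1.9, B=35, beta=0.45) so a perfect
# fit exists; real runs would use measured validation losses.
steps = np.array([1e3, 2e3, 5e3, 1e4, 2e4])
losses = step_law(steps, 1.9, 35.0, 0.45)

(E, B, beta), _ = curve_fit(step_law, steps, losses, p0=[2.0, 30.0, 0.5])

# Extrapolate to a run 5x longer than the longest proxy checkpoint.
loss_at_target = step_law(1e5, E, B, beta)
print(f"extrapolated loss at 100k steps: {loss_at_target:.3f}")
```

Because the synthetic losses come from the law itself, the fit recovers the generating parameters and the extrapolation is exact; with real measurements, the reliability of this extrapolation is what the paper's small-scale experiments are designed to validate.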

📝 Abstract
Pretraining data of large language models comprises multiple domains (e.g., web texts, academic papers, code), whose mixture proportions crucially impact the competence of the resulting models. While existing endeavors rely on heuristics or qualitative strategies to tune the proportions, we discover the quantitative predictability of model performance regarding the mixture proportions in function forms, which we refer to as the data mixing laws. Fitting such functions on sample mixtures unveils model performance on unseen mixtures before actual runs, thus guiding the selection of an ideal data mixture. Furthermore, we propose nested use of the scaling laws of training steps, model sizes, and our data mixing law to enable predicting the performance of large models trained on massive data under various mixtures with only small-scale training. Moreover, experimental results verify that our method effectively optimizes the training mixture of a 1B model trained for 100B tokens in RedPajama, reaching a performance comparable to the one trained for 48% more steps on the default mixture. Extending the application of data mixing laws to continual training accurately predicts the critical mixture proportion that avoids catastrophic forgetting and outlooks the potential for dynamic data schedules.
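The fit-then-select workflow the abstract describes can be sketched end to end. One plausible shape for the mixing law is a per-validation-domain loss L_j(r) = c_j + k_j·exp(t_j·r), where r is the vector of training-domain proportions; the two-domain setup, parameter values, and "proxy run" losses below are synthetic and purely illustrative, not the paper's actual functional fits.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothesized mixing law for one validation domain:
# L(r) = c + k * exp(t_web * r_web + t_code * r_code).
def mixing_law(r, c, k, t_web, t_code):
    return c + k * np.exp(t_web * r[:, 0] + t_code * r[:, 1])

# Sample mixtures for small proxy runs, and their validation losses on
# two validation domains. Losses are generated from known parameters so
# a perfect fit exists; real use would train a proxy model per mixture.
mixtures = np.array([[0.9, 0.1], [0.7, 0.3], [0.5, 0.5], [0.3, 0.7], [0.1, 0.9]])
web_val_loss = mixing_law(mixtures, 1.8, 0.4, -1.2, 0.9)
code_val_loss = mixing_law(mixtures, 1.1, 0.8, 0.7, -1.5)

# Fit one law per validation domain on the sampled mixtures.
p_web, _ = curve_fit(mixing_law, mixtures, web_val_loss,
                     p0=[2.0, 0.5, -1.0, 1.0], maxfev=20000)
p_code, _ = curve_fit(mixing_law, mixtures, code_val_loss,
                      p0=[1.0, 1.0, 1.0, -1.0], maxfev=20000)

# Predict aggregate loss on unseen mixtures and pick the minimizer:
# the mixture is chosen before committing to any full-scale run.
w = np.linspace(0.0, 1.0, 101)
grid = np.stack([w, 1.0 - w], axis=1)
total = mixing_law(grid, *p_web) + mixing_law(grid, *p_code)
best = grid[np.argmin(total)]
print("predicted-best (web, code) mixture:", best)
```

Note that aggregating losses across validation domains is what produces an interior optimum here: each single-domain law is monotone along the mixture simplex, so optimizing for one validation domain alone would always push the mixture to a vertex.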
Problem

Research questions and friction points this paper is trying to address.

Optimizing data mixtures for large language models
Predicting model performance using data mixing laws
Avoiding catastrophic forgetting in continual training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantitative predictability of model performance
Nested use of scaling laws for predictions
Dynamic data schedules to prevent forgetting
👥 Authors
Jiasheng Ye (Fudan University) · Large Language Models · Generative Models · AI Scientists
Peiju Liu (Fudan University)
Tianxiang Sun (Fudan University)
Yunhua Zhou (Fudan University) · Machine Learning · Natural Language Processing
Jun Zhan (Fudan University)
Xipeng Qiu (Fudan University)