Optimizing Pre-Training Data Mixtures with Mixtures of Data Expert Models

📅 2025-02-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenge of optimizing data mixture ratios during large language model (LLM) pretraining. The authors propose a loss-approximation and regression-guided framework based on a Mixture of Data Experts (MDE). Methodologically, they use a multi-expert ensemble to efficiently approximate the cross-entropy loss of diverse candidate data mixtures, and they construct a regression model that takes MDE-derived features as input and observed model losses as supervision, enabling optimization of the data composition. Key contributions include: (1) MDE enables high-fidelity, low-overhead loss estimation; and (2) the regression model explicitly links data mixture ratios to downstream generalization performance. Experiments on models ranging from 70M to 1B parameters using the SlimPajama dataset demonstrate substantial improvements over regression baselines that use only the mixture rates as input features, particularly in few-shot downstream evaluation. The paper also offers theoretical insights into why aggregating data expert predictions yields good approximations of model losses for data mixtures.
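The MDE idea can be sketched in a few lines: ensemble the next-token distributions of per-domain expert models, weighted by the candidate mixture proportions, and compute cross-entropy against held-out tokens. This is a minimal illustration of the ensembling described above; the array shapes and function names are our own, not from the paper.

```python
import numpy as np

def mde_loss(expert_probs, mixture_weights, targets):
    """Approximate the cross-entropy loss of a model trained on a data
    mixture by ensembling per-domain data expert predictions.

    expert_probs:    (num_experts, num_tokens, vocab) next-token
                     probabilities from experts, each trained on one domain.
    mixture_weights: (num_experts,) candidate mixture proportions (sum to 1).
    targets:         (num_tokens,) gold next-token ids.
    """
    # Mixture-weighted ensemble of the experts' next-token distributions.
    mixed = np.tensordot(mixture_weights, expert_probs, axes=1)  # (num_tokens, vocab)
    # Cross-entropy of the ensembled distribution against the gold tokens.
    token_probs = mixed[np.arange(len(targets)), targets]
    return float(-np.log(token_probs).mean())
```

Because the experts are trained once, scoring a new candidate mixture only requires re-weighting their cached predictions, which is what makes the approximation cheap.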

📝 Abstract
We propose a method to optimize language model pre-training data mixtures through efficient approximation of the cross-entropy loss corresponding to each candidate mixture via a Mixture of Data Experts (MDE). We use this approximation as a source of additional features in a regression model, trained from observations of model loss for a small number of mixtures. Experiments with Transformer decoder-only language models in the range of 70M to 1B parameters on the SlimPajama dataset show that our method achieves significantly better performance than approaches that train regression models using only the mixture rates as input features. Combining this improved optimization method with an objective that takes into account cross-entropy on end task data leads to superior performance on few-shot downstream evaluations. We also provide theoretical insights on why aggregation of data expert predictions can provide good approximations to model losses for data mixtures.
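The regression step described in the abstract can be sketched as follows: fit a regressor from a small number of observed (mixture, loss) pairs, using the mixture rates augmented with the MDE loss approximation as an extra input feature. This is a hedged sketch using plain linear least squares; the paper's actual regressor and feature set may differ, and all names here are illustrative.

```python
import numpy as np

def fit_mixture_regressor(mixture_rates, mde_losses, observed_losses):
    """Fit a linear model predicting observed model loss from mixture
    rates plus the MDE loss approximation as an additional feature.

    mixture_rates:   (n, num_domains) mixture proportions per observation.
    mde_losses:      (n,) MDE loss approximations for those mixtures.
    observed_losses: (n,) losses measured from actually trained models.
    """
    # Feature matrix: [rates | MDE approximation | intercept].
    X = np.column_stack([mixture_rates, mde_losses, np.ones(len(mde_losses))])
    coef, *_ = np.linalg.lstsq(X, observed_losses, rcond=None)
    return coef

def predict_loss(coef, rates, mde_loss):
    """Predict the loss of an unseen candidate mixture."""
    x = np.concatenate([rates, [mde_loss, 1.0]])
    return float(x @ coef)
```

Once fitted, the regressor can be evaluated on many candidate mixtures to pick the one with the lowest predicted loss, without training a model per candidate.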
Problem

Research questions and friction points this paper is trying to address.

Optimize pre-training data mixtures
Enhance language model performance
Approximate cross-entropy loss efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Data Experts
Cross-entropy loss approximation
Regression model features
🔎 Similar Papers
No similar papers found.