Optimizing Pretraining Data Mixtures with LLM-Estimated Utility

📅 2025-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Optimizing the mix of data sources during large language model (LLM) pretraining—balancing quality, quantity, and diversity under compute and data constraints—remains challenging. Method: The paper proposes an automated data-mixing framework with two complementary components: (1) UtiliMax, which extends token-count heuristics with utility estimates from reduced-scale ablations, achieving up to a 10.6× speedup over manual baselines; and (2) MEDU (Model Estimated Data Utility), which uses LLMs to estimate data utility from small samples, matching ablation-based performance while cutting computational requirements by roughly 200×. Together, they enable automated, compute-efficient, resource-constrained data mixing. Results: Across both compute- and data-constrained training regimes, the approach outperforms nine baseline data selection and mixing strategies, including manual and learned mixes, while remaining robust across settings.

📝 Abstract
Large Language Models improve with increasing amounts of high-quality training data. However, leveraging larger datasets requires balancing quality, quantity, and diversity across sources. After evaluating nine baseline methods under both compute- and data-constrained scenarios, we find token-count heuristics outperform manual and learned mixes, indicating that simple approaches accounting for dataset size and diversity are surprisingly effective. Building on this insight, we propose two complementary approaches: UtiliMax, which extends token-based heuristics by incorporating utility estimates from reduced-scale ablations, achieving up to a 10.6x speedup over manual baselines; and Model Estimated Data Utility (MEDU), which leverages LLMs to estimate data utility from small samples, matching ablation-based performance while reducing computational requirements by ~200x. Together, these approaches establish a new framework for automated, compute-efficient data mixing that is robust across training regimes.
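The core idea of utility-driven, resource-constrained mixing can be sketched as a small allocation problem: given a per-source utility score (from reduced-scale ablations, or LLM-estimated as in MEDU) and each source's token count, choose mixing weights that favor high-utility sources without over-repeating small ones. The function below is a minimal, hypothetical greedy sketch in that spirit; the `max_epochs` cap, the greedy strategy, and all names are illustrative assumptions, not the paper's actual UtiliMax formulation.

```python
# Illustrative sketch of resource-constrained data mixing (NOT the paper's
# exact method). Inputs: per-source utility estimates, per-source token
# counts, and a total training-token budget.

def mix_weights(utilities, token_counts, budget, max_epochs=4.0):
    """Greedily fill the token budget from the highest-utility sources,
    capping each source at `max_epochs` passes to limit repetition.
    Returns normalized mixing weights (fractions of the budget)."""
    order = sorted(range(len(utilities)),
                   key=lambda i: utilities[i], reverse=True)
    alloc = [0.0] * len(utilities)
    remaining = float(budget)
    for i in order:
        # Take as many tokens as the budget and the epoch cap allow.
        take = min(remaining, max_epochs * token_counts[i])
        alloc[i] = take
        remaining -= take
        if remaining <= 0:
            break
    total = sum(alloc)
    return [a / total for a in alloc] if total else alloc
```

For example, with a 600-token budget, a high-utility source of only 100 tokens is capped at 4 epochs (400 tokens), and the remainder spills over to the next-best source. A full implementation would replace the greedy loop with a proper constrained optimization over the mixing simplex.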
Problem

Research questions and friction points this paper is trying to address.

Optimizing Training Data
Resource-constrained Environment
Enhancing Model Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

UtiliMax
MEDU
Automated Data Optimization