Capacity-Aware Mixture Law Enables Efficient LLM Data Optimization

📅 2026-03-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high cost of optimizing data mixture strategies in large language model training and the limited accuracy of existing scaling laws when extrapolating to larger models. The authors propose an efficient data mixture optimization framework that introduces Capacity-Aware Mixture Law (CAMEL) to model the nonlinear relationship between model scale and data composition. By integrating a mapping from training loss to downstream benchmark performance, the framework enables end-to-end prediction of task-specific outcomes. It supports multi-scale compute budget allocation and demonstrates effectiveness on Mixture-of-Experts architectures. Compared to current approaches, the method reduces optimization costs by 50% while achieving up to a 3% improvement in downstream task performance.

📝 Abstract
A data mixture refers to how different data sources are combined to train large language models, and selecting an effective mixture is crucial for optimal downstream performance. Existing methods either conduct costly searches directly on the target model or rely on mixture scaling laws that fail to extrapolate well to large model sizes. We address these limitations by introducing a compute-efficient pipeline for data mixture scaling. First, we propose CAMEL, a capacity-aware mixture law that models validation loss with the nonlinear interplay between model size and mixture. We also introduce a loss-to-benchmark prediction law that estimates benchmark accuracy from validation loss, enabling end-to-end performance prediction for the target model. Next, we study how to allocate a fixed compute budget across model scales to fit the law and reduce prediction error. Finally, we apply our method to Mixture-of-Experts models with up to 7B-A150M parameters to fit the law, and verify the optimal mixture derived from the law by extrapolating to a 55B-A1.2B target model. Compared to prior methods, our approach reduces mixture optimization costs by 50% and improves downstream benchmark performance by up to 3%.
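The pipeline described in the abstract (fit a capacity-aware loss law on small-scale runs, extrapolate to a larger target model, then map predicted loss to benchmark accuracy) can be sketched as follows. This is an illustrative toy only: the functional form below, the parameter values, and the sigmoid loss-to-accuracy map are assumptions for demonstration, not the paper's actual CAMEL law.

```python
import numpy as np

# Toy capacity-aware law (assumed form, not the paper's): each data source's
# loss contribution depends on model size N, so the best mixture can shift
# with scale:  loss(N, w) = c + sum_i (a_i + b_i / log N) * w_i
def toy_loss(N, w, c, a, b):
    return c + w @ (a + b / np.log(N))

# Hypothetical ground-truth parameters for two data sources.
c_true = 2.0
a_true = np.array([0.5, -0.2])
b_true = np.array([1.5, 0.8])

# Simulate small-scale proxy runs: a few model sizes x mixture weights.
sizes = np.array([1e8, 3e8, 1e9])
mixes = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])
X, y = [], []
for N in sizes:
    for w in mixes:
        # The law is linear in (c, a_i, b_i), so build a design matrix.
        X.append(np.concatenate(([1.0], w, w / np.log(N))))
        y.append(toy_loss(N, w, c_true, a_true, b_true))
X, y = np.array(X), np.array(y)

# Fit the law by least squares on the proxy-run results.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Extrapolate to a large "target" model and pick the better candidate mixture.
N_target = 5.5e10
candidates = np.array([[0.3, 0.7], [0.7, 0.3]])
pred = [toy_loss(N_target, w, theta[0], theta[1:3], theta[3:5])
        for w in candidates]
best = candidates[int(np.argmin(pred))]

# Loss-to-benchmark map (assumed sigmoid with made-up k, L0): lower predicted
# validation loss translates into higher predicted benchmark accuracy.
k, L0 = 3.0, 2.5
acc = 1.0 / (1.0 + np.exp(k * (min(pred) - L0)))
print(best, acc)
```

The point of the sketch is the workflow, not the formula: all expensive fitting happens at small model sizes, and only the mixture chosen by the extrapolated law would be trained at the target scale.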
Problem

Research questions and friction points this paper is trying to address.

data mixture
scaling law
large language models
compute efficiency
mixture optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Capacity-Aware Mixture Law
Data Mixture Optimization
Scaling Laws
Loss-to-Benchmark Prediction
Compute-Efficient Pipeline