Rethinking Data Mixture for Large Language Models: A Comprehensive Survey and New Perspectives

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This survey addresses how to allocate sampling weights across multi-source data domains when training large language models under a fixed computational budget. It proposes a fine-grained taxonomy: offline methods are grouped into heuristic-based, algorithm-based, and function fitting-based approaches; online methods into online min-max optimization, online mixing laws, and other approaches connected to the optimization frameworks underlying offline methods. Drawing on optimization theory, statistical modeling, and empirical analysis, the survey summarizes the problem formulation and representative algorithms of each subtype, clarifies the relationships and distinctions among them, and discusses each method's advantages, disadvantages, and conditions of applicability. The result is a unified map of data-mixing methodology that identifies performance bottlenecks and open challenges, offering a principled foundation for efficient, robust, and interpretable data weighting in resource-constrained LLM training.

📝 Abstract
Training large language models with data collected from various domains can improve their performance on downstream tasks. However, given a fixed training budget, the sampling proportions of these different domains significantly impact the model's performance. How can we determine the domain weights across different data domains to train the best-performing model within constrained computational resources? In this paper, we provide a comprehensive overview of existing data mixture methods. First, we propose a fine-grained categorization of existing methods, extending beyond the previous offline and online classification. Offline methods are further grouped into heuristic-based, algorithm-based, and function fitting-based methods. For online methods, we categorize them into three groups: online min-max optimization, online mixing law, and other approaches by drawing connections with the optimization frameworks underlying offline methods. Second, we summarize the problem formulations, representative algorithms for each subtype of offline and online methods, and clarify the relationships and distinctions among them. Finally, we discuss the advantages and disadvantages of each method and highlight key challenges in the field of data mixture.
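To make the problem concrete, here is a minimal, illustrative sketch (not taken from the paper) of the online min-max flavor of data mixture the abstract mentions: domain weights are updated multiplicatively so that domains where the model lags a reference model get more sampling mass. The function name, the learning rate default, and the three example domains are all hypothetical choices for illustration, loosely in the spirit of excess-loss reweighting methods such as DoReMi.

```python
import numpy as np

def update_domain_weights(weights, losses, ref_losses, lr=0.1):
    """One exponentiated-gradient step on domain sampling weights.

    weights    : current sampling proportions over domains (sum to 1)
    losses     : per-domain losses of the model being trained
    ref_losses : per-domain losses of a fixed reference model
    lr         : step size (hypothetical default)
    """
    # Per-domain "excess loss": how much worse than the reference we are.
    excess = np.maximum(losses - ref_losses, 0.0)
    # Upweight domains with large excess loss (multiplicative-weights step).
    new_w = weights * np.exp(lr * excess)
    # Project back onto the probability simplex by renormalizing.
    return new_w / new_w.sum()

# Toy example with three hypothetical domains: web, code, books.
weights = np.array([0.5, 0.3, 0.2])
losses = np.array([2.1, 2.8, 2.0])
ref_losses = np.array([2.0, 2.2, 2.1])
weights = update_domain_weights(weights, losses, ref_losses)
print(weights)  # the hardest domain (code) gains sampling mass
```

In a full training loop this update would be interleaved with gradient steps on the model, so the mixture adapts online; offline methods instead fix the weights before training, e.g. by fitting a function from candidate mixtures to downstream loss.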
Problem

Research questions and friction points this paper is trying to address.

Optimizing domain weights for diverse data in LLMs
Surveying offline and online data mixture methods
Evaluating trade-offs in data mixture approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained categorization of data mixture methods
Offline and online method classification extension
Summarized problem formulations and algorithms
Authors
Yajiao Liu, The Chinese University of Hong Kong, Shenzhen
Congliang Chen, Ph.D. student, The Chinese University of Hong Kong, Shenzhen (Optimization; Machine Learning)
Junchi Yang, The Chinese University of Hong Kong, Shenzhen (Optimization; Machine Learning)
Ruoyu Sun, The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data