Rethinking Data Mixing from the Perspective of Large Language Models

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of theoretical guidance in data mixing strategies for large language model training—particularly challenges arising from ambiguous domain definitions, discrepancies between human and model cognition, and the impact of weighting on generalization. The study is the first to theoretically establish a connection between domain distributions and gradient dynamics, formulating data scheduling as a graph-constrained optimization problem. It introduces DoGraph, a gradient-dynamics-based data reweighting framework that explicitly models inter-domain relationships to derive improved mixing strategies. Experiments across multiple GPT-2 model scales demonstrate that DoGraph significantly enhances model generalization and achieves competitive performance.
📝 Abstract
Data mixing strategy is essential for large language model (LLM) training. Empirical evidence shows that inappropriate strategies can significantly reduce generalization. Although recent methods have improved empirical performance, several fundamental questions remain open: what constitutes a domain, whether human and model perceptions of domains are aligned, and how domain weighting influences generalization. We address these questions by establishing formal connections between gradient dynamics and domain distributions, offering a theoretical framework that clarifies the role of domains in training dynamics. Building on this analysis, we introduce DoGraph, a reweighting framework that formulates data scheduling as a graph-constrained optimization problem. Extensive experiments on GPT-2 models of varying scales demonstrate that DoGraph consistently achieves competitive performance.
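The abstract describes DoGraph as casting data scheduling as a graph-constrained optimization over domain weights, driven by gradient dynamics. The paper does not publish pseudocode here, so the following is a minimal illustrative sketch, not the authors' actual algorithm: the function name `reweight_domains`, the gradient-alignment score, the Laplacian penalty weight `lam`, and the multiplicative (mirror-descent style) update are all assumptions made for illustration.

```python
import numpy as np

def reweight_domains(domain_grads, adjacency, lr=0.1, lam=0.5, steps=100):
    """Toy graph-constrained domain reweighting (illustrative only).

    domain_grads: (k, d) array of per-domain gradient estimates
    adjacency:    (k, k) symmetric domain-similarity graph
    Returns a probability vector over the k domains.
    """
    k = domain_grads.shape[0]
    # Score each domain by how well its gradient aligns with the mean
    # gradient; better-aligned domains are treated as more helpful.
    mean_grad = domain_grads.mean(axis=0)
    align = domain_grads @ mean_grad
    # Graph Laplacian: penalizing w^T L w encourages adjacent (similar)
    # domains to receive similar weight.
    lap = np.diag(adjacency.sum(axis=1)) - adjacency
    w = np.full(k, 1.0 / k)
    for _ in range(steps):
        grad = -align + lam * lap @ w   # gradient of -align.w + (lam/2) w^T L w
        w = w * np.exp(-lr * grad)      # multiplicative update keeps w positive
        w /= w.sum()                    # renormalize onto the probability simplex
    return w
```

Under this sketch, domains whose gradients agree with the average training direction gain weight, while the graph term stops the weights of closely related domains from diverging, which is one plausible reading of "explicitly models inter-domain relationships."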
Problem

Research questions and friction points this paper aims to address.

data mixing
large language models
domain weighting
generalization
domain definition
Innovation

Methods, ideas, or system contributions that make the work stand out.

data mixing
domain reweighting
gradient dynamics
graph-constrained optimization
large language models
Yuanjian Xu
Hong Kong University of Science and Technology (Guangzhou)
Tianze Sun
Harbin Institute of Technology
Changwei Xu
OpenCSG
XinLong Zhao
China Mining Group
Jianing Hao
The Hong Kong University of Science and Technology (Guangzhou)
Human-AI collaboration · Time-series representation · Visual analysis · Recommendation system
Ran Chen
OpenCSG
Yang Liu
China Mining Group
Ruijie Xu
ShanghaiTech University
Machine Learning · Computer Vision · RLHF
Stephen Chen
Associate Professor of Information Technology, York University
Metaheuristics · Evolutionary computation · Swarm intelligence
Guang Zhang
Hong Kong University of Science and Technology (Guangzhou)