🤖 AI Summary
This work addresses the lack of theoretical guidance in data mixing strategies for large language model (LLM) training, particularly the challenges arising from ambiguous domain definitions, discrepancies between human and model perceptions of domains, and the impact of domain weighting on generalization. The study is the first to theoretically establish a connection between domain distributions and gradient dynamics, formulating data scheduling as a graph-constrained optimization problem. It introduces DoGraph, a gradient-dynamics-based data reweighting framework that explicitly models inter-domain relationships to derive improved mixing strategies. Experiments across multiple GPT-2 model scales demonstrate that DoGraph substantially improves generalization and achieves competitive performance.
📝 Abstract
The data mixing strategy is essential for large language model (LLM) training: empirical evidence shows that an inappropriate strategy can significantly degrade generalization. Although recent methods have improved empirical performance, several fundamental questions remain open: what constitutes a domain, whether human and model perceptions of domains are aligned, and how domain weighting influences generalization. We address these questions by establishing formal connections between gradient dynamics and domain distributions, offering a theoretical framework that clarifies the role of domains in training dynamics. Building on this analysis, we introduce DoGraph, a reweighting framework that formulates data scheduling as a graph-constrained optimization problem. Extensive experiments on GPT-2 models of varying scales demonstrate that DoGraph consistently achieves competitive performance.
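To make the idea of graph-constrained domain reweighting concrete, the sketch below shows one plausible instantiation; it is purely illustrative and is not DoGraph's actual objective, which the abstract does not specify. The function names (`reweight_domains`, `simplex_project`), the per-domain utility scores `g`, the similarity graph `A`, and the Laplacian smoothness penalty are all assumptions introduced here for illustration: domain mixture weights live on the probability simplex, and a graph over domains constrains related domains to receive similar weights.

```python
import numpy as np

def simplex_project(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def reweight_domains(g, A, lam=0.1, lr=0.05, steps=500):
    """Hypothetical graph-regularized domain reweighting (illustrative only).

    g : per-domain utility scores (e.g., gradient alignment with a held-out
        objective) -- higher means the domain is estimated to help more.
    A : symmetric adjacency matrix encoding inter-domain similarity.

    Minimizes  -g.w + lam * w^T L w  over the probability simplex, where
    L is the graph Laplacian, so similar domains get similar weights.
    """
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian of the domain graph
    w = np.full(len(g), 1.0 / len(g))        # start from a uniform mixture
    for _ in range(steps):
        grad = -g + 2.0 * lam * L @ w        # gradient of the penalized objective
        w = simplex_project(w - lr * grad)   # projected gradient step keeps w a distribution
    return w
```

Under this toy objective, the linear term pushes weight toward high-utility domains, while the Laplacian term smooths weights across the graph, one simple way a graph constraint can shape a mixing strategy.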