Data Mixing Agent: Learning to Re-weight Domains for Continual Pre-training

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To mitigate catastrophic forgetting during continual pre-training of large language models, this paper proposes an automatic, heuristic-free data re-weighting method. The core contribution is Data Mixing Agent, an end-to-end framework trained with reinforcement learning that treats source- and target-domain mixture weights as actions and optimizes them against feedback from an evaluation environment. Notably, the trained agent transfers without retraining to unseen source fields, target models, and domain spaces. Experiments on continual pre-training for mathematical reasoning and code generation show substantial improvements over strong baselines, with superior performance achieved using significantly less source-field data, supporting both the method's generalizability and its practical efficiency.
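
As a rough illustration of the core idea (a sketch, not the paper's implementation), the snippet below treats the per-domain mixture weights as the agent's action and improves them with a simple evolution-strategies update driven by scalar evaluation feedback. The reward function `evaluate_mixture`, the domain count, and all hyperparameters are hypothetical stand-ins for the paper's evaluation environment.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DOMAINS = 4                  # e.g. two source domains + two target domains
theta = np.zeros(N_DOMAINS)    # unconstrained logits; softmax -> mixture weights

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def evaluate_mixture(weights):
    """Stand-in for the evaluation environment: in the paper's setting this
    would train on data sampled with `weights` and score source + target
    benchmarks. Here we fake a reward that prefers a fixed 'ideal' mixture."""
    ideal = np.array([0.2, 0.2, 0.3, 0.3])
    return -np.sum((weights - ideal) ** 2)

POP, SIGMA, LR = 16, 0.1, 0.5
for step in range(200):
    # Perturb the logits, score each candidate mixture, and move toward
    # perturbations with above-average reward (standard ES gradient estimate).
    noise = rng.normal(size=(POP, N_DOMAINS))
    rewards = np.array([evaluate_mixture(softmax(theta + SIGMA * n)) for n in noise])
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    theta += LR / (POP * SIGMA) * noise.T @ adv

print("learned mixture weights:", np.round(softmax(theta), 3))
```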

📝 Abstract
Continual pre-training on small-scale, task-specific data is an effective way to improve large language models in new target fields, yet it risks catastrophic forgetting of their original capabilities. A common remedy is to re-weight the training data mixture over source- and target-field domains to achieve balanced performance. Previous domain re-weighting strategies rely on manually designated heuristics grounded in human intuition or empirical results. In this work, we show that more general heuristics can be parameterized by proposing Data Mixing Agent, the first model-based, end-to-end framework that learns to re-weight domains. The agent learns generalizable heuristics through reinforcement learning on large quantities of data mixing trajectories paired with feedback from an evaluation environment. Experiments on continual pre-training for math reasoning show that Data Mixing Agent outperforms strong baselines in achieving balanced performance across source- and target-field benchmarks. Furthermore, it generalizes well to unseen source fields, target models, and domain spaces without retraining. Direct application to the code generation field further indicates its adaptability across target domains. Additional analysis shows that the agent's heuristics align well with human intuition and that it reaches superior model performance with less source-field data.
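
To make "re-weighting training data mixtures on a domain space" concrete, here is a minimal, hypothetical sketch of how a learned weight vector could drive batch construction from domain-partitioned data pools; the pool names, weights, and batch size are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative domain-partitioned pools (placeholders for real corpora).
pools = {
    "source_web":  [f"web_{i}" for i in range(1000)],
    "source_math": [f"math_{i}" for i in range(1000)],
    "target_code": [f"code_{i}" for i in range(1000)],
}
weights = np.array([0.3, 0.2, 0.5])  # e.g. the mixing agent's current action

def sample_batch(batch_size=8):
    """Draw a batch whose per-domain composition follows `weights`."""
    names = list(pools)
    counts = rng.multinomial(batch_size, weights)  # per-domain quota
    batch = []
    for name, k in zip(names, counts):
        idx = rng.choice(len(pools[name]), size=k, replace=False)
        batch.extend(pools[name][i] for i in idx)
    rng.shuffle(batch)
    return batch

print(sample_batch())
```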
Problem

Research questions and friction points this paper is trying to address.

Balancing performance across source and target domains in continual pre-training
Automating domain reweighting heuristics without manual designation
Generalizing learned heuristics across unseen fields and models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based end-to-end domain reweighting framework
Learns heuristics via reinforcement learning on data mixing trajectories
Generalizes across unseen fields without retraining
Authors

Kailai Yang, The University of Manchester (Natural Language Processing, Large Language Models)
Xiao Liu, Microsoft Research
Lei Ji, Microsoft Research
Hao Li, The University of Manchester
Yeyun Gong, Microsoft Research Asia (Natural Language Generation, Question Answering, Pre-training)
Peng Cheng, Microsoft Research
Mao Yang, Microsoft Research