Midtraining Bridges Pretraining and Posttraining Distributions

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pretraining and instruction tuning exhibit syntactic and task-distribution mismatches, leading to catastrophic forgetting of domain-specific knowledge, particularly in mathematics and code. Method: high-quality instruction data is introduced during the late pretraining phase ("midtraining"), with controlled ablation studies on models trained from scratch using diverse supervised fine-tuning datasets. Contribution/Results: the paper provides the first systematic empirical evidence that midtraining acts as an effective domain adaptation technique, substantially mitigating knowledge forgetting in the mathematical and programming domains. Its efficacy depends primarily on the timing of the intervention rather than on the proportion of instruction data mixed into pretraining. Under equal data budgets, midtraining achieves significantly lower domain-specific validation loss than continued pretraining. These findings deliver stage-level insights into training dynamics and establish midtraining as a principled strategy for aligning pretraining with downstream task distributions.

📝 Abstract
Recently, many language models have been pretrained with a "midtraining" phase, in which higher-quality, often instruction-formatted data is mixed in at the end of pretraining. Despite the popularity of this practice, there is little scientific understanding of this phase of model training or why it is effective. In this work, we conduct the first systematic investigation of midtraining through controlled experiments with language models pretrained from scratch and fine-tuned on supervised fine-tuning datasets in different domains. We find that, when compared after supervised fine-tuning, the effectiveness of midtraining is highest in the math and code domains, where midtraining can best reduce the syntactic gap between pretraining and posttraining data. In these cases, midtraining consistently outperforms continued pretraining in both in-domain validation loss and pretraining data forgetting after posttraining. Using code midtraining as a case study, we ablate the starting time of the midtraining phase and the mixture weights of the midtraining data, and find that timing has a greater impact than mixture weights: earlier introduction of specialized data yields greater in-domain benefits and better preserves general language modeling. These findings establish midtraining as a domain adaptation technique that, compared to continued pretraining, yields better performance through reduced forgetting.
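The two knobs the abstract ablates, the starting time of the midtraining phase and the mixture weight of the midtraining data, can be pictured as a step-indexed sampling schedule. The sketch below is a minimal illustration under assumed defaults; the function names (`midtrain_mixture`, `sample_source`) and parameter values are hypothetical, not taken from the paper.

```python
# Minimal sketch of a midtraining-style data schedule, assuming a
# step-indexed sampler over two data sources. All names and default
# values here are illustrative, not from the paper.
import random

def midtrain_mixture(step, total_steps, midtrain_start_frac=0.8,
                     instruct_weight=0.25):
    """Return sampling weights (pretrain, instruct) at a training step.

    Before the midtraining start point, only pretraining data is sampled;
    afterwards, instruction-formatted data is mixed in at a fixed weight.
    The start fraction (timing) and the weight (mixture) are the two
    knobs the paper's ablations compare.
    """
    if step < midtrain_start_frac * total_steps:
        return 1.0, 0.0                            # pure pretraining
    return 1.0 - instruct_weight, instruct_weight  # midtraining phase

def sample_source(step, total_steps, rng=random):
    """Pick which corpus to draw the next batch from."""
    w_pretrain, _ = midtrain_mixture(step, total_steps)
    return "pretrain" if rng.random() < w_pretrain else "instruct"
```

With `total_steps=100` and these defaults, steps 0-79 draw only pretraining data and steps 80-99 mix in instruction data at 25%; the paper's finding that timing matters more than mixture weight corresponds to varying `midtrain_start_frac` versus `instruct_weight`.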
Problem

Research questions and friction points this paper is trying to address.

Why midtraining is effective remains poorly understood despite its widespread adoption
How midtraining bridges the distribution gap, including syntactic disparities, between pretraining and posttraining data
Whether midtraining outperforms continued pretraining in reducing catastrophic forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic, controlled study of midtraining with language models pretrained from scratch
Shows midtraining is most effective in math and code, where it reduces the syntactic gap between pretraining and posttraining data
Ablations show the timing of the midtraining phase matters more than its mixture weights, establishing midtraining as a domain adaptation technique that reduces forgetting