Mid-Training of Large Language Models: A Survey

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a unified theoretical framework and empirical analysis for the mid-training phase, the stage between pre-training and downstream fine-tuning in large language models (LLMs). We propose the first systematic taxonomy covering data distribution evolution, learning-rate annealing schedules, and long-context extension; explain mid-training efficacy through gradient noise suppression, information bottleneck alleviation, and curriculum learning; and establish a standardized evaluation benchmark with reproducible training guidelines. Experiments demonstrate that mid-training significantly improves model generalization and subsequent fine-tuning efficiency. Our findings provide a structured methodological foundation for continuous LLM capability evolution and highlight open challenges, including data-optimization co-design and dynamic context adaptation, that warrant further investigation.
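
The data-distribution axis of the taxonomy refers to shifting the sampling mixture toward higher-quality, more domain-targeted tokens once the annealing phase begins. A minimal sketch of one such reweighting scheme follows; the source names, weights, and linear interpolation rule are illustrative assumptions, not settings reported in the survey.

```python
# Illustrative sketch: evolving the data mixture during mid-training.
# Source names, mixture weights, and the linear interpolation rule are
# assumptions for illustration, not taken from the survey.

def mixture_at(step, total_steps, base_mix, anneal_mix, anneal_start=0.8):
    """Linearly shift sampling weights from the pre-training mixture
    toward a higher-quality annealing mixture in the final phase."""
    frac = step / total_steps
    t = 0.0 if frac < anneal_start else (frac - anneal_start) / (1.0 - anneal_start)
    mix = {k: (1 - t) * base_mix[k] + t * anneal_mix.get(k, 0.0) for k in base_mix}
    norm = sum(mix.values())
    return {k: v / norm for k, v in mix.items()}

base_mix = {"web": 0.70, "code": 0.15, "math": 0.05, "curated_qa": 0.10}
anneal_mix = {"web": 0.30, "code": 0.25, "math": 0.20, "curated_qa": 0.25}

for step in (0, 80_000, 90_000, 100_000):
    print(step, mixture_at(step, 100_000, base_mix, anneal_mix))
```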

📝 Abstract
Large language models (LLMs) are typically developed through large-scale pre-training followed by task-specific fine-tuning. Recent advances highlight the importance of an intermediate mid-training stage, where models undergo multiple annealing-style phases that refine data quality, adapt optimization schedules, and extend context length. This stage mitigates diminishing returns from noisy tokens, stabilizes convergence, and expands model capability in late training. Its effectiveness can be explained through gradient noise scale, the information bottleneck, and curriculum learning, which together promote generalization and abstraction. Despite widespread use in state-of-the-art systems, there has been no prior survey of mid-training as a unified paradigm. We introduce the first taxonomy of LLM mid-training spanning data distribution, learning-rate scheduling, and long-context extension. We distill practical insights, compile evaluation benchmarks, and report gains to enable structured comparisons across models. We also identify open challenges and propose avenues for future research and practice.
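
One concrete reading of the "annealing-style phases" above is the warmup-stable-decay (WSD) family of learning-rate schedules, where the rate stays flat for most of pre-training and is decayed sharply during mid-training. The sketch below uses placeholder step counts and rates to show the shape of such a schedule; it is not the recipe of any specific model covered by the survey.

```python
# Illustrative warmup-stable-decay (WSD) learning-rate schedule.
# Step counts and learning rates are placeholder values, not the
# settings of any model discussed in the survey.
import math

def wsd_lr(step, warmup=2_000, stable=80_000, decay=18_000,
           peak_lr=3e-4, final_lr=3e-5):
    """Return the learning rate at a given optimizer step."""
    if step < warmup:                       # linear warmup
        return peak_lr * step / warmup
    if step < warmup + stable:              # long constant phase
        return peak_lr
    d = min(step - warmup - stable, decay)  # cosine decay during mid-training
    cos = 0.5 * (1 + math.cos(math.pi * d / decay))
    return final_lr + (peak_lr - final_lr) * cos

for s in (0, 1_000, 50_000, 85_000, 100_000):
    print(s, f"{wsd_lr(s):.2e}")
```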
Problem

Research questions and friction points this paper is trying to address.

Investigating intermediate training stage between pre-training and fine-tuning
Addressing diminishing returns and convergence instability in LLM training
Establishing unified taxonomy and benchmarks for mid-training evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intermediate mid-training stage with annealing phases
Refining data quality and adapting optimization schedules
Extending context length to expand model capability (see the sketch below)
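
A common mechanism behind long-context extension is rescaling rotary position embeddings (RoPE) so that a longer sequence maps onto the angle range seen during pre-training (position interpolation). The sketch below assumes this particular technique, with made-up context lengths and dimensions purely for illustration; the survey covers a broader set of extension methods.

```python
# Illustrative sketch of long-context extension via RoPE position
# interpolation: positions are rescaled so a longer sequence reuses the
# rotary angles learned during pre-training. The scaling factor and
# dimensions are assumptions for illustration.
import numpy as np

def rope_angles(positions, dim=64, base=10_000.0, scale=1.0):
    """Rotary angles for each position; scale > 1 compresses positions
    (position interpolation) so a longer context fits the original range."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    pos = np.asarray(positions, dtype=np.float64) / scale
    return np.outer(pos, inv_freq)   # shape: (len(positions), dim // 2)

orig_ctx, new_ctx = 4_096, 16_384
scale = new_ctx / orig_ctx           # 4x extension -> interpolate positions by 4

# Angles at the far end of the extended window match those the model
# already learned for positions within the original window.
print(np.allclose(rope_angles([new_ctx - 1], scale=scale),
                  rope_angles([(new_ctx - 1) / scale])))
```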
Authors

Kaixiang Mo (Shopee)
Yuxin Shi (Shopee)
Weiwei Weng (Shopee)
Zhiqiang Zhou (Beijing Institute of Technology)
Shuman Liu (Shopee)
Haibo Zhang (Shopee)
Anxiang Zeng (Shopee)