🤖 AI Summary
Current multi-stage training paradigms for large language models (LLMs) make it hard to causally attribute performance to design choices made at each stage. To address this, we introduce EvoLM, a suite of over 100 models at 1B and 4B parameter scales that systematically dissects the four canonical training stages: pretraining, continued pretraining, supervised fine-tuning, and reinforcement learning. EvoLM enables fully transparent, reproducible analysis of end-to-end training dynamics, supported by a unified training and evaluation pipeline covering large-scale from-scratch training, stage-by-stage ablations, and assessment of both in-domain and out-of-domain generalization. We publicly release all models, stage-specific datasets, and code. Key findings include: (1) continued pretraining serves as a critical bridge between pretraining and post-training; (2) excessive training at any stage exhibits pronounced diminishing returns; and (3) catastrophic forgetting during domain-specific continued pretraining can be effectively mitigated via data reweighting across stages. These results offer interpretable, reproducible guidance for practitioners configuring multi-stage training pipelines.
📝 Abstract
Modern language model (LM) training has been divided into multiple stages, making it difficult for downstream developers to evaluate the impact of design choices made at each stage. We present EvoLM, a model suite that enables systematic and transparent analysis of LMs' training dynamics across pre-training, continued pre-training, supervised fine-tuning, and reinforcement learning. By training over 100 LMs with 1B and 4B parameters from scratch, we rigorously evaluate both upstream (language modeling) and downstream (problem-solving) reasoning capabilities, including considerations of both in-domain and out-of-domain generalization. Key insights highlight the diminishing returns from excessive pre-training and post-training, the importance and practices of mitigating forgetting during domain-specific continued pre-training, the crucial role of continued pre-training in bridging pre-training and post-training phases, and various intricate trade-offs when configuring supervised fine-tuning and reinforcement learning. To facilitate open research and reproducibility, we release all pre-trained and post-trained models, training datasets for all stages, and our entire training and evaluation pipeline.