🤖 AI Summary
To mitigate the high computational cost of pretraining large language models, this work proposes the Late-to-Early Training (LET) paradigm, which transfers knowledge across training stages, from later phases back to earlier ones. LET guides the shallow layers of the target model during early pretraining to emulate the deep representations of a smaller, already-pretrained source model. By combining representation transfer, early-layer guidance, and pretraining optimization, LET substantially improves training efficiency: in experiments on 1.4B and 7B models, the 1.4B model converges 1.6× faster and gains nearly 5% in downstream task accuracy, even when the source model has only one-tenth the parameters of the target.
📝 Abstract
As Large Language Models (LLMs) achieve remarkable empirical success through scaling model and data size, pretraining has become increasingly critical yet computationally prohibitive, hindering rapid development. Despite the availability of numerous pretrained LLMs developed at significant computational expense, a fundamental real-world question remains underexplored: \textit{Can we leverage existing small pretrained models to accelerate the training of larger models?} In this paper, we propose a Late-to-Early Training (LET) paradigm that enables LLMs to explicitly learn later knowledge in earlier steps and earlier layers. The core idea is to guide the early layers of an LLM during early training using representations from the late layers of a pretrained (i.e., late-training-phase) model. We identify two key mechanisms that drive LET's effectiveness: late-to-early-step learning and late-to-early-layer learning. These mechanisms significantly accelerate training convergence while robustly enhancing both language modeling capabilities and downstream task performance. Extensive experiments on 1.4B and 7B parameter models demonstrate LET's efficiency and effectiveness. Notably, when training a 1.4B LLM on the Pile dataset, our method achieves up to 1.6$\times$ speedup with nearly 5\% improvement in downstream task accuracy compared to standard training, even when using a pretrained model with 10$\times$ fewer parameters than the target model.
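The core mechanism, aligning a target model's early-layer representations with a pretrained source model's late-layer representations, can be pictured as an auxiliary distillation term added to the usual language-modeling loss. Below is a minimal NumPy sketch of such a term; the specific loss form (MSE), the projection matrix bridging the two hidden widths, and the weight `alpha` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def let_auxiliary_loss(student_hidden, teacher_hidden, proj, alpha=0.1):
    """Late-to-early guidance sketch (assumed MSE form, not the paper's exact loss).

    student_hidden: (batch, seq, d_student) early-layer states of the large target model
    teacher_hidden: (batch, seq, d_teacher) late-layer states of the small source model
    proj:           (d_teacher, d_student) learned map from teacher to student space
    alpha:          weight of the auxiliary term relative to the LM loss
    """
    target = teacher_hidden @ proj  # project teacher representations to student width
    return alpha * np.mean((student_hidden - target) ** 2)

# Toy usage: a narrower "source" model guiding a wider "target" model.
rng = np.random.default_rng(0)
student = rng.normal(size=(2, 8, 64))   # early-layer states of the target model
teacher = rng.normal(size=(2, 8, 16))   # late-layer states of the source model
proj = rng.normal(size=(16, 64)) * 0.1  # hypothetical trainable projection
aux = let_auxiliary_loss(student, teacher, proj)
```

In training, `aux` would be added to the standard next-token cross-entropy, so the early layers are pulled toward the source model's mature representations while the rest of the network trains normally.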