🤖 AI Summary
In large-scale pretraining, the choice of learning rate critically influences both training efficiency and model performance. This work proposes two paradigms for extrapolating the optimal learning rate from low-cost experiments: Fitting and Transfer. The Fitting paradigm establishes, for the first time, a scaling law for the learning-rate search factor, reducing hyperparameter tuning complexity from O(n³) to O(n·C_D·C_η). The Transfer paradigm extends μTransfer to Mixture-of-Experts (MoE) architectures and generalizes it across additional hyperparameter dimensions, including model depth, weight decay, and token horizon. Empirical results show that μTransfer scales poorly in large-scale settings, whereas the Fitting paradigm, grounded in the derived scaling law, offers superior scalability and practicality, providing a systematic guideline for hyperparameter tuning in industrial-scale pretraining.
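The Fitting paradigm's complexity reduction comes from replacing a full grid search over (model size, data size, LR) with a predictive fit from a handful of cheap proxy runs. A minimal sketch of this idea, assuming a hypothetical power-law form η*(N, D) = c·N^a·D^b fitted by log-linear least squares (the functional form and all numbers below are illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative (N, D, optimal-LR) triples from small proxy runs -- fabricated for this sketch.
N = np.array([1e7, 1e7, 1e8, 1e8, 1e9])          # parameter counts
D = np.array([1e9, 1e10, 1e9, 1e10, 1e10])        # token counts
lr = np.array([3e-3, 2e-3, 1.2e-3, 8e-4, 3e-4])   # tuned optimal LR at each scale

# Fit log(lr) = log(c) + a*log(N) + b*log(D) via ordinary least squares.
X = np.column_stack([np.ones_like(N), np.log(N), np.log(D)])
coef, *_ = np.linalg.lstsq(X, np.log(lr), rcond=None)
log_c, a, b = coef

def predict_lr(n_params: float, n_tokens: float) -> float:
    """Extrapolate the fitted optimal LR to an unseen (N, D) scale."""
    return float(np.exp(log_c + a * np.log(n_params) + b * np.log(n_tokens)))

print(predict_lr(1e10, 1e11))  # extrapolate to a larger target scale
```

Once fitted, each new target scale costs one evaluation of the closed form instead of a fresh LR sweep, which is where the O(n³) → O(n·C_D·C_η)-style saving comes from.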
📝 Abstract
Optimal configuration of the learning rate (LR) is a fundamental yet formidable challenge in large-scale pre-training. Given the stringent trade-off between training cost and model performance, the pivotal question is whether the optimal LR can be accurately extrapolated from low-cost experiments. In this paper, we formalize this investigation into two distinct research paradigms: Fitting and Transfer. Within the Fitting Paradigm, we introduce a scaling law for the LR search factor, reducing the search complexity from $O(n^3)$ to $O(n \cdot C_D \cdot C_\eta)$ via predictive modeling. Within the Transfer Paradigm, we extend the principles of $\mu$Transfer to the Mixture-of-Experts (MoE) architecture, broadening its applicability to encompass model depth, weight decay, and token horizons. By pushing the boundaries of existing hyperparameter research in terms of scale, we conduct a comprehensive comparison between these two paradigms. Our empirical results challenge the scalability of the widely adopted $\mu$Transfer in large-scale pre-training scenarios. Furthermore, we provide a rigorous analysis through the dual lenses of training stability and feature learning to elucidate why module-wise parameter tuning underperforms in large-scale settings. This work offers systematic practical guidelines and a fresh theoretical perspective for optimizing industrial-level pre-training.
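For context on the Transfer Paradigm the abstract builds on: $\mu$Transfer's core prescription (under $\mu$P with Adam) is that hidden-layer learning rates shrink as 1/width when moving from a tuned narrow proxy to a wider target model. A deliberately simplified sketch of just that rule (the widths and base LR are made up, and full $\mu$Transfer also adjusts initializers and output multipliers, which this omits):

```python
def mu_transfer_lr(base_lr: float, base_width: int, target_width: int) -> float:
    """Scale a hidden-layer Adam LR from a narrow proxy model to a wider target,
    following the 1/width rule of muP. Simplified: ignores the initializer and
    output-multiplier adjustments that full muTransfer also applies."""
    return base_lr * base_width / target_width

# Tune on a narrow proxy, then transfer the LR to the target width.
proxy_lr = 1e-2  # hypothetical LR found by sweeping the proxy model
print(mu_transfer_lr(proxy_lr, base_width=256, target_width=4096))  # -> 0.000625
```

The paper's comparison asks whether such per-module transfer rules keep holding at industrial scale, or whether directly fitting a scaling law for the LR (the Fitting Paradigm) extrapolates more reliably.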