🤖 AI Summary
This work proposes a principled, trial-and-error-free method for determining optimal learning rate schedules in deep learning. By leveraging a solvable power-law random feature model and optimal control theory, the authors analytically derive a stage-wise optimal learning rate schedule: it exhibits polynomial decay during an "easy" learning phase and transitions to a warmup-stable-decay-like profile in a subsequent "hard" phase. This approach uncovers an intrinsic connection between the optimal schedule and the underlying task structure and naturally extends to the joint optimization of momentum and batch size. Experimental results demonstrate that the derived schedule significantly outperforms constant or power-law baselines, achieving faster convergence and accurately predicting the empirically observed compute-optimal scaling laws.
📝 Abstract
Setting the learning rate for a deep learning model is a critical part of successful training, yet choosing this hyperparameter is often done empirically with trial and error. In this work, we explore a solvable model of optimal learning rate schedules for a power-law random feature model trained with stochastic gradient descent (SGD). We consider the optimal schedule $\eta_T^\star(t)$ where $t$ is the current iterate and $T$ is the total training horizon. This schedule is computed both numerically and analytically (when possible) using optimal control methods. Our analysis reveals two regimes, which we term the easy phase and the hard phase. In the easy phase the optimal schedule is a polynomial decay $\eta_T^\star(t) \simeq T^{-\xi} (1-t/T)^{\delta}$ where $\xi$ and $\delta$ depend on the properties of the features and task. In the hard phase, the optimal schedule resembles warmup-stable-decay, with a constant (in $T$) initial learning rate and annealing performed over a vanishing (in $T$) fraction of training steps. We investigate joint optimization of learning rate and batch size, identifying a degenerate optimality condition. Our model also predicts the compute-optimal scaling laws (where model size and training steps are chosen optimally) in both easy and hard regimes. Going beyond SGD, we consider optimal schedules for the momentum $\beta(t)$, where speedups in the hard phase are possible. We compare our optimal schedule to various benchmarks in our task, including (1) optimal constant learning rates $\eta_T(t) \sim T^{-\xi}$ and (2) optimal power laws $\eta_T(t) \sim T^{-\xi} t^{-\chi}$, finding that our schedule achieves better rates than either of these. Our theory suggests that learning rate transfer across training horizons depends on the structure of the model and task. We explore these ideas in simple experimental pretraining setups.
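To make the two schedule shapes from the abstract concrete, here is a minimal Python sketch of an easy-phase polynomial-decay schedule $\eta_T^\star(t) \simeq T^{-\xi}(1-t/T)^{\delta}$ and a hard-phase warmup-stable-decay-like schedule. The exponents $\xi$, $\delta$ and the decay fraction depend on the features and task in the actual analysis; the numerical values below are placeholders, not values derived in the paper.

```python
def easy_phase_lr(t, T, xi=0.5, delta=1.0, c=1.0):
    """Easy-phase schedule: eta(t) ~ T^{-xi} * (1 - t/T)^{delta}.

    xi and delta depend on the feature/task structure in the paper;
    the defaults here are illustrative placeholders.
    """
    return c * T ** (-xi) * (1.0 - t / T) ** delta

def wsd_like_lr(t, T, eta0=0.1, decay_frac=0.05):
    """Hard-phase schedule resembling warmup-stable-decay: a constant
    (in T) learning rate, annealed linearly over a small fraction of
    the final steps. eta0 and decay_frac are placeholder values.
    """
    decay_start = T * (1.0 - decay_frac)
    if t < decay_start:
        return eta0
    return eta0 * (T - t) / (T - decay_start)

T = 1000
easy = [easy_phase_lr(t, T) for t in range(T)]
hard = [wsd_like_lr(t, T) for t in range(T)]
```

Note how the two shapes differ: the easy-phase schedule decays throughout training and its overall scale shrinks with the horizon as $T^{-\xi}$, while the hard-phase schedule stays at a horizon-independent plateau and only anneals near the very end.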