Theory of Optimal Learning Rate Schedules and Scaling Laws for a Random Feature Model

๐Ÿ“… 2026-02-04
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work proposes a principled, trial-and-error-free method for determining optimal learning rate schedules in deep learning. By leveraging a solvable power-law random feature model and optimal control theory, the authors analytically derive a stage-wise optimal learning rate schedule: it exhibits polynomial decay during an โ€œeasyโ€ learning phase and transitions to a warmup-stable-decayโ€“like profile in a subsequent โ€œhardโ€ phase. This approach uncovers an intrinsic connection between the optimal schedule and the underlying task structure and naturally extends to the joint optimization of momentum and batch size. Experimental results demonstrate that the derived schedule significantly outperforms constant or power-law baselines, achieving faster convergence and accurately predicting the empirically observed compute-optimal scaling laws.

๐Ÿ“ Abstract
Setting the learning rate for a deep learning model is a critical part of successful training, yet choosing this hyperparameter is often done empirically with trial and error. In this work, we explore a solvable model of optimal learning rate schedules for a power-law random feature model trained with stochastic gradient descent (SGD). We consider the optimal schedule $\eta_T^\star(t)$ where $t$ is the current iterate and $T$ is the total training horizon. This schedule is computed both numerically and analytically (when possible) using optimal control methods. Our analysis reveals two regimes which we term the easy phase and hard phase. In the easy phase the optimal schedule is a polynomial decay $\eta_T^\star(t) \simeq T^{-\xi} (1-t/T)^{\delta}$ where $\xi$ and $\delta$ depend on the properties of the features and task. In the hard phase, the optimal schedule resembles warmup-stable-decay with constant (in $T$) initial learning rate and annealing performed over a vanishing (in $T$) fraction of training steps. We investigate joint optimization of learning rate and batch size, identifying a degenerate optimality condition. Our model also predicts the compute-optimal scaling laws (where model size and training steps are chosen optimally) in both easy and hard regimes. Going beyond SGD, we consider optimal schedules for the momentum $\beta(t)$, where speedups in the hard phase are possible. We compare our optimal schedule to various benchmarks in our task including (1) optimal constant learning rates $\eta_T(t) \sim T^{-\xi}$ and (2) optimal power laws $\eta_T(t) \sim T^{-\xi} t^{-\chi}$, finding that our schedule achieves better rates than either of these. Our theory suggests that learning rate transfer across training horizon depends on the structure of the model and task. We explore these ideas in simple experimental pretraining setups.
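The two schedule shapes named in the abstract can be sketched directly from the formulas given there. The snippet below is a minimal illustration, not the paper's implementation: the easy-phase schedule follows $\eta_T^\star(t) \simeq T^{-\xi}(1-t/T)^{\delta}$, while the hard-phase profile is approximated here by a constant rate with a linear anneal over a small final fraction of steps. The exponent values `xi`, `delta` and the decay fraction are illustrative placeholders; in the paper they are determined by the feature and task structure.

```python
def easy_phase_lr(t, T, xi=0.5, delta=1.0):
    """Easy-phase optimal schedule from the abstract:
    eta*_T(t) ~ T^{-xi} * (1 - t/T)^{delta}.
    xi and delta are placeholder exponents; the paper derives them
    from the power-law structure of the features and the task."""
    return T ** (-xi) * (1.0 - t / T) ** delta

def hard_phase_lr(t, T, eta0=0.1, decay_frac=0.05):
    """Hard-phase, warmup-stable-decay-like profile: a learning rate
    that is constant in T, annealed over a small final fraction of
    training steps. The linear anneal is an assumption for
    illustration; the abstract only specifies the overall shape."""
    decay_start = T * (1.0 - decay_frac)
    if t < decay_start:
        return eta0
    # linear anneal to zero over the final decay_frac of training
    return eta0 * (T - t) / (T - decay_start)
```

For example, with `T = 100`, `xi = 0.5`, `delta = 1.0`, the easy-phase schedule starts at $T^{-1/2} = 0.1$ and decays to zero at $t = T$; the hard-phase profile stays at `eta0` for the first 95% of steps and then anneals.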
Problem

Research questions and friction points this paper is trying to address.

optimal learning rate schedule
stochastic gradient descent
scaling laws
random feature model
training horizon
Innovation

Methods, ideas, or system contributions that make the work stand out.

optimal learning rate schedule
random feature model
scaling laws
stochastic gradient descent
optimal control
๐Ÿ”Ž Similar Papers
No similar papers found.
Blake Bordelon
Postdoctoral Fellow at Harvard CMSA
Machine Learning · Theoretical Neuroscience
Francesco Mori
Center of Mathematical Sciences and Applications, Harvard University, Cambridge, MA, USA