🤖 AI Summary
Learning rate warmup empirically accelerates large-model training, yet its theoretical advantages remain poorly understood. This paper addresses that gap by introducing a novel family of generalized smoothness assumptions grounded in optimization theory, whose applicability is validated both theoretically and empirically. Under these assumptions, the authors analyze the convergence of gradient descent (GD) in both deterministic and stochastic settings, showing that warmup consistently accelerates GD: in some specific cases, GD with warmup converges up to $\Theta(T)$ times faster than with a non-increasing learning rate schedule. By connecting warmup to the proposed smoothness structure, the work narrows the gap between the empirical success of warmup and its theoretical understanding, offering guidance for designing learning rate schedules from an optimization-theory perspective.
📝 Abstract
Learning rate warmup is a popular and practical technique in training large-scale deep neural networks. Despite its huge success in practice, the theoretical advantages of this strategy of gradually increasing the learning rate at the beginning of the training process have not been fully understood. To resolve this gap between theory and practice, we first propose a novel family of generalized smoothness assumptions, and validate its applicability both theoretically and empirically. Under the novel smoothness assumption, we study the convergence properties of gradient descent (GD) in both deterministic and stochastic settings. It is shown that learning rate warmup consistently accelerates GD, and GD with warmup can converge at most $\Theta(T)$ times faster than with a non-increasing learning rate schedule in some specific cases, providing insights into the benefits of this strategy from an optimization theory perspective.
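For readers unfamiliar with the strategy being analyzed, the following is a minimal sketch of gradient descent with a linear warmup schedule on a toy one-dimensional quadratic. The linear ramp and all constants here are illustrative choices, not the paper's specific schedule; the paper's analysis concerns general increasing schedules under its generalized smoothness assumptions.

```python
def warmup_lr(t, warmup_steps, peak_lr):
    """Linear warmup: ramp the step size from peak_lr / warmup_steps
    up to peak_lr over the first warmup_steps iterations, then hold it
    constant. (Illustrative schedule, not the paper's exact choice.)"""
    if t < warmup_steps:
        return peak_lr * (t + 1) / warmup_steps
    return peak_lr

def gd_with_warmup(grad, x0, steps, warmup_steps, peak_lr):
    """Plain gradient descent x_{t+1} = x_t - eta_t * grad(x_t),
    where eta_t follows the warmup schedule above."""
    x = x0
    for t in range(steps):
        x = x - warmup_lr(t, warmup_steps, peak_lr) * grad(x)
    return x

# Toy example: minimize f(x) = x^2 / 2, whose gradient is x.
x_final = gd_with_warmup(lambda x: x, x0=4.0, steps=100,
                         warmup_steps=10, peak_lr=0.5)
```

During the first 10 iterations the step size grows from 0.05 to 0.5; afterwards the iterate contracts by a factor of 0.5 per step, so `x_final` is driven very close to the minimizer at 0.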