🤖 AI Summary
This work addresses the long-standing theoretical gap between Adam and SGD in deep learning, where Adam lacked convergence guarantees comparable to those of SGD under standard assumptions. We establish, for the first time, a unified convergence analysis framework for Adam under the canonical SGD assumptions—namely, L-smoothness and the ABC inequality. Methodologically, we remove the restrictive bounded-gradient assumption by integrating stochastic optimization theory, martingale convergence analysis, Lyapunov function construction, and ABC techniques. Our key contributions are: (1) rigorous proofs of almost-sure convergence and L₁ convergence of Adam; (2) derivation of a non-asymptotic sample complexity bound of O(1/√T), matching SGD exactly; and (3) the first unified analysis accommodating last-iterate convergence, almost-sure convergence, and non-asymptotic bounds simultaneously. These results demonstrate that Adam achieves theoretical convergence guarantees on par with SGD, substantially enhancing its credibility and practical applicability.
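The summary above leans on the ABC inequality as the key relaxed assumption. For context, the formulation below is the one commonly used in the SGD literature (sometimes called expected smoothness); the paper's exact constants and normalization may differ, so treat the symbols A, B, C here as illustrative:

```latex
% ABC inequality (expected smoothness), a common form in the SGD
% literature: A, B, C >= 0 are constants, f^* = \inf_x f(x), and
% \nabla f(x,\xi) is a stochastic gradient at x with sample \xi.
\mathbb{E}_{\xi}\!\left[\|\nabla f(x,\xi)\|^{2}\right]
  \;\le\; 2A\left(f(x) - f^{*}\right) \;+\; B\,\|\nabla f(x)\|^{2} \;+\; C
```

Bounded stochastic gradients correspond to the special case A = B = 0, which is why replacing that assumption with the ABC inequality is a genuine relaxation.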
📝 Abstract
Adaptive Moment Estimation (Adam) is a cornerstone optimization algorithm in deep learning, widely recognized for its adaptive learning rates and its efficiency on large-scale data. Despite its practical success, however, the theoretical understanding of Adam's convergence has relied on stringent assumptions, such as almost surely bounded stochastic gradients or uniformly bounded gradients, which are more restrictive than those typically required for analyzing stochastic gradient descent (SGD). In this paper, we introduce a novel and comprehensive framework for analyzing the convergence properties of Adam, offering a versatile approach to establishing its convergence. Specifically, we prove that Adam achieves asymptotic (last-iterate) convergence in both the almost-sure sense and the L₁ sense under the relaxed assumptions typically used for SGD, namely L-smoothness and the ABC inequality. Moreover, under the same assumptions, we show that Adam attains non-asymptotic sample complexity bounds similar to those of SGD.
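For readers less familiar with the algorithm under analysis, the sketch below implements the standard Adam update (exponential moving averages of the first and second gradient moments, with bias correction) as given by Kingma and Ba; the toy objective, step size, and noise level are illustrative choices, not taken from the paper:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update at iteration t >= 1 (standard bias-corrected form)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment EMA (elementwise)
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 with noisy gradients 2x + noise.
rng = np.random.default_rng(0)
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 2.0 * theta + 0.1 * rng.standard_normal(1)  # stochastic gradient
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
```

On this smooth toy problem the iterates drift toward the minimizer at 0, settling within a noise floor determined by the step size and gradient noise, which mirrors the last-iterate behavior the paper formalizes.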