🤖 AI Summary
The theoretical convergence analysis of Adam has long relied on strong assumptions and intricate technical machinery, resulting in lengthy proofs with poor scalability. Method: This paper fundamentally reinterprets Adam as a *symbolic optimizer*, departing from the conventional preconditioned stochastic gradient descent with momentum (SGDM) perspective, and establishes a concise, unified convergence framework grounded in symbolic descent directions, generalized *p*-affine variance, and *(L₀, L₁, q)*-smoothness. Contribution/Results: Under mild, dimension-free assumptions—without requiring dimension-dependent constants or numerical stabilization parameters—we prove, for the first time, that Adam achieves the optimal convergence rate of *O*(1/*T*^{1/4}), improving upon prior bounds of *O*(ln *T* / *T*^{1/4}). Our analysis explicitly reveals the critical role of the momentum term in convergence and provides principled guidance for learning rate tuning, thereby substantially bridging the gap between theory and practice.
📝 Abstract
Adam is widely recognized as one of the most effective optimizers for training deep neural networks (DNNs). Despite its remarkable empirical success, its theoretical convergence analysis remains unsatisfactory. Existing works predominantly interpret Adam as a preconditioned stochastic gradient descent with momentum (SGDM), formulated as $\bm{x}_{t+1} = \bm{x}_t - \frac{\gamma_t}{\sqrt{\bm{v}_t}+\epsilon} \circ \bm{m}_t$. This perspective necessitates strong assumptions and intricate techniques, resulting in lengthy and opaque convergence proofs that are difficult to verify and extend. In contrast, we propose a novel interpretation by treating Adam as a sign-like optimizer, expressed as $\bm{x}_{t+1} = \bm{x}_t - \gamma_t \frac{|\bm{m}_t|}{\sqrt{\bm{v}_t}+\epsilon} \circ \mathrm{Sign}(\bm{m}_t)$. This reformulation significantly simplifies the convergence analysis. For the first time, under some mild conditions, we prove that Adam achieves the optimal rate of $\mathcal{O}\left(\frac{1}{T^{1/4}}\right)$ rather than the previous $\mathcal{O}\left(\frac{\ln T}{T^{1/4}}\right)$, under the weak assumptions of generalized $p$-affine variance and $(L_0, L_1, q)$-smoothness, without dependence on the model dimensionality or the numerical stability parameter $\epsilon$. Additionally, our theoretical analysis provides new insights into the role of momentum as a key factor ensuring convergence and offers practical guidelines for tuning learning rates in Adam, further bridging the gap between theory and practice.
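The sign-like rewrite is an elementwise identity: since $\bm{m}_t = |\bm{m}_t| \circ \mathrm{Sign}(\bm{m}_t)$, the preconditioned-SGDM step and the magnitude-scaled sign step coincide. A minimal numerical sketch of this equivalence (not code from the paper; the variable names and hyperparameter values are illustrative assumptions):

```python
import numpy as np

# Illustrative check that the two views of the Adam step agree elementwise.
# gamma (learning rate) and eps (stability parameter) are assumed values.
rng = np.random.default_rng(0)
d = 5
m = rng.standard_normal(d)   # first-moment estimate m_t
v = rng.random(d) + 0.1      # second-moment estimate v_t (kept positive)
gamma, eps = 1e-3, 1e-8

# Conventional view: preconditioned SGDM step
step_precond = gamma / (np.sqrt(v) + eps) * m

# Sign-like view: magnitude-scaled sign-descent step
step_sign = gamma * np.abs(m) / (np.sqrt(v) + eps) * np.sign(m)

print(np.allclose(step_precond, step_sign))  # the two steps are identical
```

The equivalence holds for any elementwise update, which is why the paper can carry the convergence machinery of sign-descent methods over to Adam unchanged.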