🤖 AI Summary
This work investigates the intrinsic convergence behavior of the Adam optimizer on highly degenerate polynomial functions without relying on external learning rate scheduling. Through a combination of dynamical systems stability analysis, theoretical proofs, and numerical experiments, it demonstrates for the first time that Adam automatically achieves local linear convergence in such settings, markedly outperforming the sublinear rates of gradient descent and momentum methods. The core contributions include establishing conditions for local asymptotic stability, introducing a decoupling mechanism between the second-moment estimate and the squared gradient, and characterizing a hyperparameter phase diagram whose theoretical boundaries align closely with empirical observations.
📝 Abstract
Adam is a widely used optimization algorithm in deep learning, yet the specific class of objective functions where it exhibits inherent advantages remains underexplored. Unlike prior studies requiring external schedulers and $\beta_2$ near 1 for convergence, this work investigates the "natural" auto-convergence properties of Adam. We identify a class of highly degenerate polynomials where Adam converges automatically without additional schedulers. Specifically, we derive theoretical conditions for local asymptotic stability on degenerate polynomials and demonstrate strong alignment between theoretical bounds and experimental results. We prove that Adam achieves local linear convergence on these degenerate functions, significantly outperforming the sublinear convergence of Gradient Descent and Momentum. This acceleration stems from a decoupling mechanism between the second moment $v_t$ and squared gradient $g_t^2$, which exponentially amplifies the effective learning rate. Finally, we characterize Adam's hyperparameter phase diagram, identifying three distinct behavioral regimes: stable convergence, spikes, and SignGD-like oscillation.
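The contrast between Adam's linear rate and gradient descent's sublinear rate can be seen on a minimal example. The sketch below (not the paper's code; hyperparameters are common defaults chosen for illustration) runs plain gradient descent and textbook Adam on the degenerate quartic $f(x) = x^4$, whose minimizer $x^* = 0$ has a vanishing Hessian:

```python
def grad(x):
    """Gradient of the degenerate quartic f(x) = x^4."""
    return 4.0 * x ** 3

def run_gd(x, lr=0.01, steps=5000):
    """Plain gradient descent; converges only sublinearly on f(x) = x^4."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def run_adam(x, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, steps=5000):
    """Textbook Adam with bias correction. As the gradient shrinks, the
    second moment v decays only at rate beta2, decoupling from g^2 and
    keeping the effective step size large near the flat minimum."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)  # bias-corrected second moment
        x -= lr * m_hat / (v_hat ** 0.5 + eps)
    return x

if __name__ == "__main__":
    print(f"GD final |x|:   {abs(run_gd(1.0)):.3e}")
    print(f"Adam final |x|: {abs(run_adam(1.0)):.3e}")
```

With identical learning rates and step budgets, Adam's final iterate ends up far closer to the minimizer than gradient descent's, consistent with the linear-vs-sublinear separation the abstract describes.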