Towards Understanding Adam Convergence on Highly Degenerate Polynomials

📅 2026-03-10
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work investigates the intrinsic convergence behavior of the Adam optimizer on highly degenerate polynomial functions, without relying on external learning-rate scheduling. Combining dynamical-systems stability analysis, theoretical proofs, and numerical experiments, it demonstrates, for the first time, that Adam automatically achieves local linear convergence in this setting, markedly outperforming the sublinear rates of gradient descent and momentum. The core contributions are: conditions for local asymptotic stability; identification of a decoupling mechanism between the second-moment estimate and the squared gradient that amplifies the effective learning rate; and a hyperparameter phase diagram whose theoretical boundaries align closely with empirical observations.
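For reference, the standard Adam recursion that this analysis concerns (bias-correction factors omitted here for brevity) is

$$
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2, \qquad
\theta_{t+1} = \theta_t - \eta\, \frac{m_t}{\sqrt{v_t} + \epsilon},
$$

where $g_t = \nabla f(\theta_t)$. The decoupling mentioned above is the regime in which $v_t$ stops tracking $g_t^2$, which, per the abstract, exponentially amplifies the effective step $\eta\, m_t / (\sqrt{v_t} + \epsilon)$.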

📝 Abstract
Adam is a widely used optimization algorithm in deep learning, yet the specific class of objective functions where it exhibits inherent advantages remains underexplored. Unlike prior studies requiring external schedulers and $\beta_2$ near 1 for convergence, this work investigates the "natural" auto-convergence properties of Adam. We identify a class of highly degenerate polynomials where Adam converges automatically without additional schedulers. Specifically, we derive theoretical conditions for local asymptotic stability on degenerate polynomials and demonstrate strong alignment between theoretical bounds and experimental results. We prove that Adam achieves local linear convergence on these degenerate functions, significantly outperforming the sublinear convergence of Gradient Descent and Momentum. This acceleration stems from a decoupling mechanism between the second moment $v_t$ and squared gradient $g_t^2$, which exponentially amplifies the effective learning rate. Finally, we characterize Adam's hyperparameter phase diagram, identifying three distinct behavioral regimes: stable convergence, spikes, and SignGD-like oscillation.
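As an illustrative numerical sketch (not the paper's exact setup), the snippet below runs plain gradient descent and Adam on the single degenerate polynomial $f(x) = x^6$, whose minimizer at $x = 0$ has vanishing curvature; the choice of $x^6$, the learning rate, and the iteration count are assumptions made here for illustration only.

```python
# Illustrative comparison on a degenerate polynomial f(x) = x^6 (f''(0) = 0).
# Not the paper's exact function class or hyperparameters.
import math

def grad(x):
    """Gradient of f(x) = x^6."""
    return 6.0 * x ** 5

def run_gd(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)          # step shrinks as the gradient vanishes
    return x

def run_adam(x0, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g           # first-moment estimate
        v = beta2 * v + (1 - beta2) * g ** 2      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)              # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)  # normalized (sign-like) step
    return x

if __name__ == "__main__":
    x0 = 1.0
    print(f"GD   final |x|: {abs(run_gd(x0)):.3e}")
    print(f"Adam final |x|: {abs(run_adam(x0)):.3e}")
```

Near the minimizer $f'(x)$ vanishes much faster than $x$, so GD's step $\eta f'(x)$ collapses and progress is sublinear, while Adam's normalized step does not collapse in the same way; this is the intuition behind the sublinear-versus-linear contrast claimed in the abstract.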
Problem

Research questions and friction points this paper is trying to address.

Adam
convergence
degenerate polynomials
optimization
deep learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adam optimizer
degenerate polynomials
auto-convergence
local linear convergence
hyperparameter phase diagram
Authors

Zhiwei Bai
Shanghai Jiao Tong University
Machine Learning; Deep Learning

Jiajie Zhao
Institute of Natural Sciences, School of Mathematical Sciences, Shanghai Jiao Tong University

Zhangchen Zhou
Institute of Natural Sciences, School of Mathematical Sciences, Shanghai Jiao Tong University

Zhi-Qin John Xu
Institute of Natural Sciences, School of Mathematical Sciences, Shanghai Jiao Tong University; MOE-LSC, School of Artificial Intelligence, Shanghai Jiao Tong University

Yaoyu Zhang
Shanghai Jiao Tong University
Deep Learning Theory