🤖 AI Summary
This work addresses the long-standing gap between the strong empirical performance and weak theoretical understanding of Nesterov’s Accelerated Gradient (NAG) method in nonconvex optimization. Focusing on “benign nonconvex” settings—such as regions satisfying the Polyak–Łojasiewicz (PL) condition or gradient dominance—we develop a unified analytical framework integrating continuous- and discrete-time dynamical systems modeling, stochastic optimization theory, and nonconvex geometric analysis. We establish, for the first time under benign nonconvexity, an optimal $O(1/k^2)$ convergence rate—matching that of strongly convex settings—for both deterministic NAG and its stochastic variants under additive noise, multiplicative noise, and their mixture. Our results provide rigorous geometric and algorithmic foundations for the rapid convergence of NAG in typical local regions of overparameterized deep learning models, thereby bridging a critical theory–practice gap for accelerated methods in nonconvex optimization.
📝 Abstract
While momentum-based optimization algorithms are commonly used in the notoriously non-convex optimization problems of deep learning, their analysis has historically been restricted to the convex and strongly convex setting. In this article, we partially close this gap between theory and practice and demonstrate that virtually identical guarantees can be obtained in optimization problems with a `benign' non-convexity. We show that these weaker geometric assumptions are well justified in overparametrized deep learning, at least locally. Variations of this result are obtained for a continuous time model of Nesterov's accelerated gradient descent algorithm (NAG), the classical discrete time version of NAG, and versions of NAG with stochastic gradient estimates with purely additive noise and with noise that exhibits both additive and multiplicative scaling.