Nesterov acceleration in benignly non-convex landscapes

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the long-standing gap between the strong empirical performance and weak theoretical understanding of Nesterov’s Accelerated Gradient (NAG) method in nonconvex optimization. Focusing on “benign nonconvex” settings—such as regions satisfying the Polyak–Łojasiewicz (PL) condition or gradient dominance—we develop a unified analytical framework integrating continuous- and discrete-time dynamical systems modeling, stochastic optimization theory, and nonconvex geometric analysis. We establish, for the first time under benign nonconvexity, an optimal $O(1/k^2)$ convergence rate—matching the best rate attainable in the smooth convex setting—for both deterministic NAG and its stochastic variants under additive noise, multiplicative noise, and their mixture. Our results provide rigorous geometric and algorithmic foundations for the rapid convergence of NAG in typical local regions of overparameterized deep learning models, thereby bridging a critical theory–practice gap for accelerated methods in nonconvex optimization.
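For reference, the Polyak–Łojasiewicz (PL) condition mentioned in the summary is the standard gradient-dominance inequality: for a smooth function $f$ with minimum value $f^\ast$ and some $\mu > 0$,

```latex
% PL inequality (gradient dominance with parameter \mu > 0):
\frac{1}{2}\,\bigl\|\nabla f(x)\bigr\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{\ast}\bigr)
\qquad \text{for all } x .
```

Crucially, the PL inequality does not require convexity; it only forces every stationary point to be a global minimizer, which is what makes such landscapes "benignly" nonconvex.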

📝 Abstract
While momentum-based optimization algorithms are commonly used in the notoriously non-convex optimization problems of deep learning, their analysis has historically been restricted to the convex and strongly convex setting. In this article, we partially close this gap between theory and practice and demonstrate that virtually identical guarantees can be obtained in optimization problems with a 'benign' non-convexity. We show that these weaker geometric assumptions are well justified in overparametrized deep learning, at least locally. Variations of this result are obtained for a continuous time model of Nesterov's accelerated gradient descent algorithm (NAG), the classical discrete time version of NAG, and versions of NAG with stochastic gradient estimates with purely additive noise and with noise that exhibits both additive and multiplicative scaling.
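As a concrete reference point, the classical discrete-time NAG iteration discussed in the abstract can be sketched as follows. This is a minimal illustration on a toy quadratic (which satisfies the PL inequality); the step size, momentum schedule, and objective are illustrative choices, not the paper's setup.

```python
import numpy as np

def nag(grad, x0, step, iters, momentum=lambda k: k / (k + 3)):
    """Classical discrete-time Nesterov accelerated gradient (sketch).

    The textbook k/(k+3) momentum schedule is used here; the paper's
    exact parameterization may differ.
    """
    x = y = np.asarray(x0, dtype=float)
    for k in range(iters):
        x_next = y - step * grad(y)          # gradient step at the look-ahead point
        y = x_next + momentum(k) * (x_next - x)  # momentum extrapolation
        x = x_next
    return x

# Toy objective: a strongly convex quadratic, hence PL with mu = 1.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

# Step size 0.05 <= 1/L with L = 10 (largest eigenvalue of A).
x_star = nag(grad, x0=[5.0, 5.0], step=0.05, iters=2000)
```

The standard convex guarantee $f(x_k) - f^\ast \le 2\|x_0 - x^\ast\|^2 / (s(k+1)^2)$ already forces the final objective value here below $10^{-3}$; the point of the paper is that comparable guarantees survive under benign non-convexity.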
Problem

Research questions and friction points this paper is trying to address.

Analyzing Nesterov acceleration in benign non-convex landscapes
Bridging theory-practice gap in non-convex deep learning optimization
Extending guarantees to overparametrized models with stochastic NAG variants
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous and discrete time NAG analysis
Stochastic gradient NAG with additive noise
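The stochastic variant with purely additive gradient noise can be sketched in the same way: the gradient oracle returns the true gradient plus zero-mean Gaussian noise. The noise level, seed, and toy quadratic below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def stochastic_nag(grad, x0, step, iters, sigma, seed=0):
    """NAG with additive Gaussian gradient noise (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = y = np.asarray(x0, dtype=float)
    for k in range(iters):
        # Additive-noise oracle: true gradient plus N(0, sigma^2) perturbation.
        noisy_grad = grad(y) + sigma * rng.standard_normal(y.shape)
        x_next = y - step * noisy_grad
        y = x_next + (k / (k + 3)) * (x_next - x)
        x = x_next
    return x

A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

# With small additive noise the iterates settle into a noise floor
# around the minimizer rather than converging exactly.
x_noisy = stochastic_nag(grad, x0=[5.0, 5.0], step=0.05, iters=2000, sigma=1e-3)
```

Multiplicative noise (whose magnitude scales with the gradient itself, as in mini-batch gradients near a minimizer) would instead be modeled by perturbing `grad(y)` proportionally to its norm; the paper treats both regimes and their mixture.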