Enhancing Optimizer Stability: Momentum Adaptation of the NGN Step-size

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep learning optimizers often suffer from poor robustness due to high sensitivity to step-size hyperparameters, leading to costly and labor-intensive tuning. To address this, we propose NGN-M—a novel optimizer that integrates momentum with the NGN step-size adaptation scheme, the first such method requiring neither interpolation conditions nor bounded gradient assumptions. Within a nonconvex stochastic optimization framework, NGN-M achieves an $O(1/\sqrt{K})$ convergence rate. Theoretically, it exhibits strong robustness to step-size selection, substantially alleviating hyperparameter dependency. Empirically, NGN-M matches or surpasses state-of-the-art optimizers—including Adam and SGD with momentum—across diverse tasks (e.g., image classification, language modeling) and model architectures. Crucially, it maintains stable convergence over a wide range of step sizes, demonstrating both theoretical rigor and practical effectiveness.

📝 Abstract
Modern optimization algorithms that incorporate momentum and an adaptive step-size offer improved performance in numerous challenging deep learning tasks. However, their effectiveness is often highly sensitive to the choice of hyperparameters, especially the step-size. Tuning these parameters is often difficult, resource-intensive, and time-consuming. Therefore, recent efforts have been directed toward enhancing the stability of optimizers across a wide range of hyperparameter choices [Schaipp et al., 2024]. In this paper, we introduce an algorithm that matches the performance of state-of-the-art optimizers while improving stability to the choice of the step-size hyperparameter through a novel adaptation of the NGN step-size method [Orvieto and Xiao, 2024]. Specifically, we propose a momentum-based version (NGN-M) that attains the standard convergence rate of $\mathcal{O}(1/\sqrt{K})$ under less restrictive assumptions, without the need for an interpolation condition or assumptions of bounded stochastic gradients or iterates, in contrast to previous approaches. Additionally, we empirically demonstrate that the combination of the NGN step-size with momentum results in enhanced robustness to the choice of the step-size hyperparameter while delivering performance that is comparable to or surpasses other state-of-the-art optimizers.
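To make the idea concrete, below is a minimal Python sketch of a momentum update driven by an NGN-style adaptive step-size. The specific step-size form $\gamma = c / (1 + c\,\|g\|^2 / (2f))$ and the heavy-ball way of combining it with momentum are assumptions for illustration; the paper's exact NGN-M update may differ.

```python
def ngn_step_size(c, loss, grad_sq_norm, eps=1e-12):
    # NGN-style adaptive step-size (assumed form, after Orvieto & Xiao, 2024):
    # gamma = c / (1 + c * ||g||^2 / (2 * f(x))).
    # It shrinks when the gradient is large relative to the loss and
    # approaches the base value c as the loss nears zero.
    return c / (1.0 + c * grad_sq_norm / (2.0 * max(loss, eps)))

def ngn_m_step(x, m, grad, loss, c=0.5, beta=0.9):
    # One illustrative NGN-M-style iteration: heavy-ball momentum buffer m,
    # scaled by the NGN step-size. The precise coupling of momentum and
    # step-size in the paper may differ from this sketch.
    grad_sq = sum(g * g for g in grad)
    gamma = ngn_step_size(c, loss, grad_sq)
    m = [beta * mi + gi for mi, gi in zip(m, grad)]
    x = [xi - gamma * mi for xi, mi in zip(x, m)]
    return x, m

# Toy usage: minimize f(x) = 0.5 * x^2, whose gradient is x.
x, m = [2.0], [0.0]
for _ in range(200):
    loss = 0.5 * x[0] ** 2
    grad = [x[0]]
    x, m = ngn_m_step(x, m, grad, loss)
```

On this quadratic the step-size is constant ($\gamma = c/(1+c)$, since $\|g\|^2 = 2f$), which illustrates the claimed robustness: the effective step is automatically bounded regardless of how large $c$ is chosen.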
Problem

Research questions and friction points this paper is trying to address.

Reducing optimizer sensitivity to hyperparameter choices
Reducing step-size tuning difficulty in deep learning
Enhancing convergence without restrictive gradient assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Momentum-based NGN step-size adaptation
Enhanced robustness to hyperparameter choices
Standard convergence rate under relaxed assumptions