Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance

📅 2024-06-06
🏛️ arXiv.org
📈 Citations: 3
Influential: 1
🤖 AI Summary
Stochastic Heavy Ball (SHB) methods suffer from costly and inefficient hyperparameter tuning, particularly for step-sizes. Method: We propose three novel Polyak-type adaptive step-size schemes (MomSPS_max, MomDecSPS, and MomAdaSPS) grounded in the Iterate Moving Average (IMA) viewpoint of SHB. These integrate the stochastic Polyak step-size principle with momentum dynamics, enabling rigorous convergence analysis for convex and smooth problems. Contribution/Results: MomDecSPS and MomAdaSPS are the first adaptive step-size rules for SHB that guarantee convergence to the exact minimizer, without interpolation assumptions and without a priori knowledge of problem parameters. MomSPS_max guarantees convergence to a neighborhood of the solution for convex smooth problems without interpolation, and recovers the fast deterministic Heavy Ball rate when interpolation holds. The analysis is tight and recovers the stochastic Polyak step-size guarantees for SGD as a special case. Experiments demonstrate strong robustness to step-size and momentum choices, rapid convergence, and effectively hyperparameter-free optimization.
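For reference, the stochastic Polyak step-size that these schemes build on is not stated in this listing; in its standard SPS_max form from the SGD literature, with sampled component loss $f_{i_k}$ and infimum $f_{i_k}^*$, it reads:

```latex
\gamma_k \;=\; \min\left\{ \frac{f_{i_k}(x_k) - f_{i_k}^*}{c\,\lVert \nabla f_{i_k}(x_k) \rVert^2},\; \gamma_b \right\},
```

where $c > 0$ is a scaling constant and $\gamma_b > 0$ caps the step. The proposed MomSPS_max, MomDecSPS, and MomAdaSPS adapt this principle to the SHB update via the IMA reformulation.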

📝 Abstract
Stochastic gradient descent with momentum, also known as the Stochastic Heavy Ball method (SHB), is one of the most popular algorithms for solving large-scale stochastic optimization problems in various machine learning tasks. In practical scenarios, tuning the step-size and momentum parameters of the method is a prohibitively expensive and time-consuming process. In this work, inspired by the recent success of the stochastic Polyak step-size in improving the performance of stochastic gradient descent (SGD), we propose and explore new Polyak-type variants suitable for the update rule of the SHB method. In particular, using the Iterate Moving Average (IMA) viewpoint of SHB, we propose and analyze three novel step-size selections: MomSPS$_{max}$, MomDecSPS, and MomAdaSPS. For MomSPS$_{max}$, we provide convergence guarantees for SHB to a neighborhood of the solution for convex and smooth problems (without assuming interpolation). If interpolation is also satisfied, then using MomSPS$_{max}$, SHB converges to the true solution at a fast rate matching the deterministic HB. The other two variants, MomDecSPS and MomAdaSPS, are the first adaptive step-sizes for SHB that guarantee convergence to the exact minimizer, without a priori knowledge of the problem parameters and without assuming interpolation. Our convergence analysis of SHB is tight and obtains the convergence guarantees of the stochastic Polyak step-size for SGD as a special case. We supplement our analysis with experiments validating our theory and demonstrating the effectiveness and robustness of our algorithms.
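To make the combination concrete, here is a minimal sketch on a least-squares problem that pairs a plain heavy-ball update with the standard SPS_max step from the SGD literature. The function name and the least-squares setup are illustrative; the paper's actual MomSPS$_{max}$, MomDecSPS, and MomAdaSPS rules are derived through the IMA reformulation and differ in their exact form.

```python
import numpy as np

def shb_with_sps_max(A, b, iters=2000, gamma_b=1.0, beta=0.5, c=0.5, seed=0):
    """Illustrative sketch: SHB with an SPS_max-style Polyak step-size.

    Components f_i(x) = 0.5 * (A[i] @ x - b[i])**2, so f_i^* = 0 when the
    linear system is consistent. Step-size (standard SPS_max form):
        gamma_k = min((f_i(x_k) - f_i^*) / (c * ||grad f_i(x_k)||^2), gamma_b)
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    x_prev = x.copy()
    for _ in range(iters):
        i = rng.integers(n)
        r = A[i] @ x - b[i]          # residual of the sampled component
        f_i = 0.5 * r * r            # sampled loss value (f_i^* = 0 here)
        g = r * A[i]                 # stochastic gradient of f_i
        gn2 = g @ g
        gamma = gamma_b if gn2 == 0.0 else min(f_i / (c * gn2), gamma_b)
        # Heavy-ball update: gradient step plus momentum term
        x, x_prev = x - gamma * g + beta * (x - x_prev), x
    return x
```

Note that the step-size requires no tuned learning-rate schedule, only the cap `gamma_b` and the constant `c`; this is the robustness to hyperparameter choices that the experiments in the paper highlight.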
Problem

Research questions and friction points this paper is trying to address.

Tuning the step-size and momentum parameters of SHB is prohibitively expensive and time-consuming in practice.
Existing stochastic Polyak step-sizes were designed for SGD and do not cover the SHB update rule.
Prior SHB step-size choices lack guarantees of exact convergence without interpolation or knowledge of problem parameters.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed Polyak-type step-size variants (MomSPS_max, MomDecSPS, MomAdaSPS) for the SHB method
Introduced MomDecSPS and MomAdaSPS, the first adaptive step-sizes for SHB with exact-convergence guarantees
Provided tight convergence guarantees for SHB, recovering the SGD Polyak step-size results as a special case