Improved Analysis for Sign-based Methods with Momentum Updates

📅 2025-07-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sign-based optimization methods achieve only $\mathcal{O}(T^{-1/4})$ convergence under the separable smoothness assumption, and they rely on large batch sizes or unimodal symmetric noise assumptions. To address these limitations, this paper proposes a momentum-enhanced sign stochastic gradient descent (signSGD) framework. The method attains the same $\mathcal{O}(T^{-1/4})$ convergence under standard $\ell_2$-smoothness with only a constant batch size, improving the prior momentum-based signSGD bound by a factor of $\mathcal{O}(d^{1/2})$. In distributed settings, it combines sign-based momentum with majority-vote aggregation and obtains convergence rates that improve on previous results. Empirical evaluations confirm its dual advantages: superior communication efficiency and faster convergence across diverse benchmarks.

📝 Abstract
In this paper, we present enhanced analysis for sign-based optimization algorithms with momentum updates. Traditional sign-based methods, under the separable smoothness assumption, guarantee a convergence rate of $\mathcal{O}(T^{-1/4})$, but they either require large batch sizes or assume unimodal symmetric stochastic noise. To address these limitations, we demonstrate that signSGD with momentum can achieve the same convergence rate using constant batch sizes without additional assumptions. Our analysis, under the standard $\ell_2$-smoothness condition, improves upon the result of the prior momentum-based signSGD method by a factor of $\mathcal{O}(d^{1/2})$, where $d$ is the problem dimension. Furthermore, we explore sign-based methods with majority vote in distributed settings and show that the proposed momentum-based method yields convergence rates of $\mathcal{O}\left( d^{1/2}T^{-1/2} + dn^{-1/2} \right)$ and $\mathcal{O}\left( \max\left\{ d^{1/4}T^{-1/4},\, d^{1/10}T^{-1/5} \right\} \right)$, which outperform the previous results of $\mathcal{O}\left( dT^{-1/4} + dn^{-1/2} \right)$ and $\mathcal{O}\left( d^{3/8}T^{-1/8} \right)$, respectively. Numerical experiments further validate the effectiveness of the proposed methods.
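For intuition, the core momentum-then-sign update described in the abstract can be pictured with the minimal sketch below. This is an illustrative PyTorch-style example, not the paper's exact algorithm or tuned hyperparameters; the function name, the learning rate, and the momentum coefficient are placeholders chosen for the example.

```python
import torch

def signsgd_momentum_step(params, grads, momenta, lr=1e-2, beta=0.9):
    """One momentum-based signSGD update (illustrative sketch)."""
    for p, g, m in zip(params, grads, momenta):
        # Exponential moving average of stochastic gradients.
        m.mul_(beta).add_(g, alpha=1.0 - beta)
        # Step only along the coordinate-wise sign of the momentum estimate,
        # so the update magnitude per coordinate is just the learning rate.
        p.add_(torch.sign(m), alpha=-lr)

# Toy usage: minimize f(x) = ||x||^2 / 2 from noisy gradients with a constant batch size of one.
x = torch.randn(10)
m = torch.zeros_like(x)
for _ in range(200):
    g = x + 0.1 * torch.randn_like(x)  # stochastic gradient of f at x
    signsgd_momentum_step([x], [g], [m])
```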
Problem

Research questions and friction points this paper is trying to address.

Enhance sign-based optimization with momentum updates
Achieve convergence with constant batch sizes
Improve distributed sign-based methods' performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

SignSGD with momentum achieves convergence using constant batch sizes
Improved convergence rate under the standard ℓ2-smoothness condition
Momentum-based method enhances the distributed majority vote (see the sketch after this list)
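The distributed variant can be sketched as follows: each worker transmits only the coordinate-wise sign of its local momentum (one bit per coordinate), and the server aggregates these signs by majority vote. This is a hedged PyTorch-style illustration under those assumptions, not the paper's implementation; the function and variable names are invented for the example.

```python
import torch

def majority_vote_step(param, worker_momenta, lr=1e-2):
    """Sign-based majority-vote aggregation (illustrative sketch)."""
    # Each worker sends sign(m_i): a 1-bit-per-coordinate message.
    votes = torch.stack([torch.sign(m) for m in worker_momenta])  # shape (n, d)
    # Coordinate-wise majority vote over the n workers.
    direction = torch.sign(votes.sum(dim=0))
    # All workers apply the same voted descent direction.
    param.add_(direction, alpha=-lr)

# Toy usage with n = 5 workers holding local momentum buffers of dimension d = 10.
d, n = 10, 5
x = torch.randn(d)
momenta = [torch.randn(d) for _ in range(n)]
majority_vote_step(x, momenta)
```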
Authors

Wei Jiang
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China

Dingzhi Yu
Nanjing University
Machine Learning, Stochastic Optimization, Online Learning

Sifan Yang
Nanjing University
Machine Learning, Optimization

Wenhao Yang
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China

Lijun Zhang
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China