Compressed Decentralized Momentum Stochastic Gradient Methods for Nonconvex Optimization

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Decentralized training for nonconvex stochastic optimization suffers from communication bottlenecks and from performance degradation under data heterogeneity and unbounded gradients. Method: This paper proposes two compression-based decentralized algorithms that combine momentum acceleration, gradient tracking, adaptive step sizes, and compressed communication. It presents the first decentralized adaptive stochastic gradient method with compressed communication, and it introduces a heavy-ball-style gradient tracking mechanism to handle data heterogeneity without assuming bounded gradients. Contribution/Results: Both methods achieve an optimal convergence rate of $\mathcal{O}(1/T)$ with linear speedup, and their algorithmic parameters can be chosen independently of the network topology within a certain regime of the user-specified error tolerance, while the analysis jointly controls the consensus error, the compression error, and the bias from the momentum gradient. Experiments on deep neural networks and Transformer models demonstrate significant improvements over state-of-the-art methods, achieving superior trade-offs between convergence speed and communication cost.
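This page gives no pseudocode, and the summary merges two algorithms into one paragraph. As a rough illustration of how compressed gossip and momentum typically compose, the sketch below shows one CHOCO-SGD-style iteration with a top-k compressor and a heavy-ball momentum buffer. The compressor choice, parameter values, and update order are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsifier: keep the k largest-magnitude entries.
    (A common contractive compressor; the paper's compressor may differ.)"""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def compressed_momentum_step(x, x_hat, m, grads, W, lr=0.05, beta=0.9,
                             gamma=0.5, k=10):
    """One illustrative iteration of compressed decentralized momentum SGD
    (CHOCO-SGD-style difference compression; NOT the paper's exact update).

    x:     (n, d) private local iterates, one row per node
    x_hat: (n, d) publicly shared compressed copies of the iterates
    m:     (n, d) heavy-ball momentum buffers
    grads: (n, d) local stochastic gradients
    W:     (n, n) doubly stochastic mixing matrix of the network topology
    """
    n, _ = x.shape
    m = beta * m + grads                     # heavy-ball momentum accumulation
    x = x - lr * m                           # local descent step
    # Compress only the *difference* to the public copy, so the
    # compression error stays bounded instead of accumulating.
    q = np.stack([top_k(x[i] - x_hat[i], k) for i in range(n)])
    x_hat = x_hat + q                        # all nodes refresh the shared copies
    x = x + gamma * (W @ x_hat - x_hat)      # gossip step toward the network average
    return x, x_hat, m
```

Transmitting q (k nonzero entries) instead of the full d-dimensional iterate is where the communication saving comes from; the analytical difficulty the summary refers to is controlling the interaction between this compression error, the consensus error, and the momentum bias.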

📝 Abstract
In this paper, we design two compressed decentralized algorithms for solving nonconvex stochastic optimization under two different scenarios. Both algorithms adopt a momentum technique to achieve fast convergence and a message-compression technique to save communication costs. Though momentum acceleration and compressed communication have been used in the literature, it is highly nontrivial to theoretically prove the effectiveness of their composition in a decentralized algorithm that can maintain the benefits of both sides, because of the need to simultaneously control the consensus error, the compression error, and the bias from the momentum gradient. For the scenario where gradients are bounded, our proposal is a compressed decentralized adaptive method. To the best of our knowledge, this is the first decentralized adaptive stochastic gradient method with compressed communication. For the scenario of data heterogeneity without bounded gradients, our proposal is a compressed decentralized heavy-ball method, which applies a gradient tracking technique to address the challenge of data heterogeneity. Notably, both methods achieve an optimal convergence rate, and they can achieve linear speedup and adopt topology-independent algorithmic parameters within a certain regime of the user-specified error tolerance. Superior empirical performance is observed over state-of-the-art methods on training deep neural networks (DNNs) and Transformers.
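To make the gradient-tracking idea in the abstract concrete, here is a minimal uncompressed, momentum-free tracking iteration, assuming the standard recursion with a doubly stochastic mixing matrix W; the names and signature are illustrative, not taken from the paper. Because W is doubly stochastic, averaging the tracker update over nodes shows that the mean of the trackers always equals the mean of the freshest local gradients, which is what removes the bias caused by heterogeneous local data.

```python
import numpy as np

def gradient_tracking_step(x, y, g_old, stoch_grad, W, lr=0.05):
    """One plain gradient-tracking iteration (illustrative sketch only; the
    paper's heavy-ball method adds momentum and message compression on top).

    x:          (n, d) local iterates, one row per node
    y:          (n, d) trackers; each row estimates the global average gradient
    g_old:      (n, d) stochastic gradients evaluated at the previous iterates
    stoch_grad: callable (node_index, point) -> local stochastic gradient
    W:          (n, n) doubly stochastic mixing matrix
    """
    x_new = W @ x - lr * y                   # mix with neighbors, descend along tracker
    g_new = np.stack([stoch_grad(i, x_new[i]) for i in range(len(x))])
    y_new = W @ y + g_new - g_old            # tracking recursion: with y0 = g0, the
    # node-average of y_new equals the node-average of g_new at every iteration.
    return x_new, y_new, g_new
```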
Problem

Research questions and friction points this paper is trying to address.

Develop compressed decentralized algorithms for nonconvex stochastic optimization
Achieve fast convergence with momentum and reduced communication costs
Handle both the bounded-gradient scenario and the data-heterogeneity scenario without bounded gradients
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compressed decentralized momentum for fast convergence
Message-compression to reduce communication costs
Gradient tracking for data heterogeneity scenarios
Authors
Wei Liu
Department of Mathematical Sciences, Rensselaer Polytechnic Institute
Anweshit Panda
Department of Computer Science, Rensselaer Polytechnic Institute
Ujwal Pandey
Department of Computer Science, Rensselaer Polytechnic Institute
Christopher Brissette
Department of Computer Science, Rensselaer Polytechnic Institute
Yikang Shen
xAI
Deep Learning, Natural Language Processing
George M. Slota
Associate Professor, Rensselaer Polytechnic Institute
Graph algorithms, high performance computing, combinatorial scientific computing
Naigang Wang
IBM T. J. Watson Research Center (nwang@us.ibm.com)
Deep learning, AI accelerator, on-chip power converter, on-chip inductor/transformer, MEMS transducers
Jie Chen
MIT-IBM Watson AI Lab, IBM Research
Yangyang Xu
Department of Mathematical Sciences, Rensselaer Polytechnic Institute