Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning

📅 2019-10-23
🏛️ IEEE Transactions on Communications
📈 Citations: 61
Influential: 2
🤖 AI Summary
To address the communication bottleneck in decentralized machine learning, this paper proposes Quantized Group Alternating Direction Method of Multipliers (Q-GADMM), where each node communicates exclusively with two neighbors and transmits quantized model differences to reduce communication overhead. We introduce an adaptive stochastic quantization scheme that dynamically adjusts quantization precision and probability. Under convex objectives, we establish rigorous convergence guarantees; further, we extend the framework to non-convex settings via Q-SGADMM, enabling support for deep neural networks and stochastic gradient updates. Experiments demonstrate that, on linear regression tasks, Q-GADMM achieves substantial communication reduction without sacrificing accuracy or convergence speed. On DNN-based image classification tasks, Q-SGADMM significantly lowers total communication cost compared to SGADMM, while preserving model performance.
📝 Abstract
In this article, we propose a communication-efficient decentralized machine learning (ML) algorithm, coined quantized group ADMM (Q-GADMM). To reduce the number of communication links, every worker in Q-GADMM communicates only with two neighbors, while updating its model via the group alternating direction method of multipliers (GADMM). Moreover, each worker transmits the quantized difference between its current model and its previously quantized model, thereby decreasing the communication payload size. However, due to the lack of centralized entity in decentralized ML, the spatial sparsity and payload compression may incur error propagation, hindering model training convergence. To overcome this, we develop a novel stochastic quantization method to adaptively adjust model quantization levels and their probabilities, while proving the convergence of Q-GADMM for convex objective functions. Furthermore, to demonstrate the feasibility of Q-GADMM for non-convex and stochastic problems, we propose quantized stochastic GADMM (Q-SGADMM) that incorporates deep neural network architectures and stochastic sampling. Simulation results corroborate that Q-GADMM significantly outperforms GADMM in terms of communication efficiency while achieving the same accuracy and convergence speed for a linear regression task. Similarly, for an image classification task using DNN, Q-SGADMM achieves significantly less total communication cost with identical accuracy and convergence speed compared to its counterpart without quantization, i.e., stochastic GADMM (SGADMM).
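The abstract's core mechanism — each worker transmitting a stochastically quantized version of the difference between its current model and its previously quantized model — can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, grid construction, and parameter choices below are illustrative assumptions. The one property the sketch preserves is unbiasedness: each entry is rounded up or down on a uniform grid with probabilities chosen so the quantized difference equals the true difference in expectation, which is what keeps the quantization error from biasing the updates.

```python
import numpy as np

def stochastic_quantize(x, x_ref, num_levels=16, rng=None):
    """Illustrative sketch: stochastically quantize (x - x_ref) on a uniform grid.

    x      : current model vector
    x_ref  : previously quantized model (the reference both sides share)
    Each entry of the difference is rounded to an adjacent grid point with
    probability equal to its fractional position, so E[output] == x (unbiased).
    """
    rng = np.random.default_rng() if rng is None else rng
    diff = x - x_ref
    scale = np.max(np.abs(diff))
    if scale == 0.0:
        return x_ref.copy()          # nothing to transmit
    step = 2.0 * scale / (num_levels - 1)
    pos = (diff + scale) / step      # continuous position on the grid
    low = np.floor(pos)
    prob_up = pos - low              # P(round up) = fractional part -> unbiased
    rounded = low + (rng.random(diff.shape) < prob_up)
    q_diff = rounded * step - scale  # quantized difference (what gets sent)
    return x_ref + q_diff            # reconstructed quantized model
```

Only `q_diff` (integer grid indices plus the scalar `scale`) would need to be communicated, which is where the payload reduction comes from; Q-GADMM additionally adapts the quantization levels and probabilities over iterations, which this fixed-grid sketch does not capture.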
Problem

Research questions and friction points this paper is trying to address.

Distributed Learning
Communication Efficiency
Error Propagation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized Machine Learning
Q-GADMM Algorithm
Stochastic Quantization
Anis Elgabli
King Fahd University of Petroleum and Minerals
Wireless Communications, Artificial Intelligence and Machine Learning
Jihong Park
Associate Professor, SUTD, SMIEEE
Wireless Communications, Semantic Communication, Distributed Machine Learning, AI-RAN
A. S. Bedi
Department of Electrical Engineering, IIT Kanpur
M. Bennis
Center of Wireless Communication, University of Oulu, Finland
V. Aggarwal
School of Industrial Engineering and the School of Electrical and Computer Engineering, Purdue University, USA