FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models

📅 2024-03-14
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
In federated learning, frequent gradient communication between heterogeneous clients and the server is a major bottleneck. To address this, we propose FedComLoc, a framework built on the Scaffnew optimizer that integrates Top-K gradient sparsification with low-bit quantization, augmented by an error-compensation mechanism to jointly optimize communication compression and convergence stability. FedComLoc supports multi-step local SGD training across heterogeneous devices, substantially reducing uplink communication overhead. Extensive experiments in realistic heterogeneous settings show a reduction of over 70% in communication volume while maintaining convergence speed and model accuracy comparable to uncompressed baselines, significantly improving training efficiency and the practicality of edge deployment in federated learning.
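To make the two compression operators named in the summary concrete, here is a minimal NumPy sketch of Top-K sparsification and uniform low-bit quantization. The function names and the uniform quantizer are illustrative assumptions, not the paper's code.

```python
import numpy as np

def topk_compress(v: np.ndarray, k: int) -> np.ndarray:
    """Top-K sparsification: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]  # indices of the k largest |v_i|
    out[idx] = v[idx]
    return out

def quantize(v: np.ndarray, bits: int = 8) -> np.ndarray:
    """Uniform quantization onto 2**bits levels spanning the range of v."""
    lo, hi = float(v.min()), float(v.max())
    if hi == lo:                               # constant vector: nothing to round
        return v.copy()
    scale = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((v - lo) / scale) * scale
```

The communication savings come from transmitting only the k surviving values with their indices, each encoded at low bit width, instead of the full dense vector.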

📝 Abstract
Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact with a central server, while respecting privacy. A critical bottleneck in FL is the communication cost. A pivotal strategy to mitigate this burden is Local Training, which involves running multiple local stochastic gradient descent iterations between communication phases. Our work is inspired by the innovative Scaffnew algorithm, which has considerably advanced the reduction of communication complexity in FL. We introduce FedComLoc (Federated Compressed and Local Training), integrating practical and effective compression into Scaffnew to further enhance communication efficiency. Extensive experiments, using the popular TopK compressor and quantization, demonstrate its prowess in substantially reducing communication overheads in heterogeneous settings.
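Since Local Training is the abstract's central ingredient, the following sketch shows a Scaffnew-style loop in simplified form: every client takes a control-variate-corrected local SGD step, and with probability p the server averages the models, so communication happens only every 1/p iterations in expectation. Variable names and the exact update order are assumptions based on published descriptions of Scaffnew/ProxSkip, not the authors' implementation.

```python
import numpy as np

def scaffnew_iteration(x, h, grad_oracles, gamma, p, rng):
    """One iteration of a simplified Scaffnew loop (sketch, not the paper's code).

    x[i] is client i's model, h[i] its control variate (corrects client drift),
    grad_oracles[i] returns a stochastic gradient of client i's local loss.
    """
    n = len(x)
    for i in range(n):
        # local SGD step, shifted by the control variate
        x[i] = x[i] - gamma * (grad_oracles[i](x[i]) - h[i])
    if rng.random() < p:                       # communicate only with probability p
        x_bar = sum(x) / n                     # server averages the client models
        for i in range(n):
            h[i] = h[i] + (p / gamma) * (x_bar - x[i])  # control-variate correction
            x[i] = x_bar.copy()                # each client restarts from the average
    return x, h
```

Skipping communication with probability 1 - p is what yields Scaffnew's reduced communication complexity; FedComLoc additionally compresses what is sent on the rounds that do communicate.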
Problem

Research questions and friction points this paper is trying to address.

Reduce communication costs in federated learning
Integrate compression with local training strategies
Enhance efficiency in heterogeneous client settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating compression into the Scaffnew algorithm (see the sketch after this list)
Using the TopK compressor and quantization techniques
Reducing communication overhead in federated learning
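Putting the two earlier sketches together, one plausible way (an assumption for illustration, not the paper's verbatim algorithm) to plug compression into the communication step is to compress each client's uplink before the server averages. This reuses topk_compress and quantize from the first sketch.

```python
def compressed_communicate(x, h, gamma, p, compress):
    """Communication step with a compressed uplink (illustrative sketch).

    Each client uploads compress(x[i]) instead of its dense model; the server
    averages the compressed uplinks and broadcasts the result.
    """
    n = len(x)
    x_bar = sum(compress(xi) for xi in x) / n  # average of compressed models
    for i in range(n):
        h[i] = h[i] + (p / gamma) * (x_bar - x[i])
        x[i] = x_bar.copy()
    return x, h

# Example uplink compressor: Top-K keeping 10% of entries, then 8-bit quantization.
# compress = lambda v: quantize(topk_compress(v, k=max(1, v.size // 10)), bits=8)
```

Whether compression is applied to full models, to model differences, or also on the downlink is a design choice; this sketch illustrates only the uplink case emphasized by the abstract.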
Kai Yi
Computer Science Program, CEMSE Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia
Georg Meinhardt
Computer Science Program, CEMSE Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia
Laurent Condat
Senior Research Scientist, King Abdullah University of Science and Technology (KAUST), Saudi Arabia
optimization, convex optimization, nonsmooth optimization, federated learning, signal and image processing
Peter Richtárik
Computer Science Program, CEMSE Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Kingdom of Saudi Arabia