🤖 AI Summary
In federated learning, frequent gradient exchange between heterogeneous clients and the central server is a major communication bottleneck. To address this, the authors propose FedComLoc, a framework built on the Scaffnew optimizer that integrates Top-K gradient sparsification with low-bit quantization, augmented by an error-compensation mechanism to jointly optimize communication compression and convergence stability. FedComLoc supports multi-step local SGD training across heterogeneous devices, substantially reducing uplink communication overhead. Extensive experiments under realistic heterogeneous settings demonstrate over a 70% reduction in communication volume while maintaining convergence speed and model accuracy comparable to uncompressed baselines, significantly improving training efficiency and the practicality of edge deployment in federated learning.
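To make the compression idea concrete, here is a minimal sketch of Top-K gradient sparsification combined with an error-feedback residual, one common form of error compensation. This is an illustrative toy (the class and function names are our own, not the paper's API), and it omits the quantization and Scaffnew components:

```python
import numpy as np

def topk_compress(grad: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of grad; zero out the rest."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of top-k magnitudes
    out[idx] = grad[idx]
    return out

class ErrorFeedbackCompressor:
    """Top-K with error compensation: the residual dropped in one round is
    added back to the gradient before compressing the next round.
    Illustrative sketch only, not the paper's exact mechanism."""

    def __init__(self, k: int):
        self.k = k
        self.residual = None  # accumulated compression error

    def compress(self, grad: np.ndarray) -> np.ndarray:
        if self.residual is None:
            self.residual = np.zeros_like(grad)
        corrected = grad + self.residual      # re-inject past error
        sent = topk_compress(corrected, self.k)
        self.residual = corrected - sent      # store what was dropped
        return sent
```

Only `sent` (k values plus their indices) would cross the network, which is where the uplink savings come from; the residual stays local to the client.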
📝 Abstract
Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact with a central server, while being respectful of privacy. A critical bottleneck in FL is the communication cost. A pivotal strategy to mitigate this burden is *Local Training*, which involves running multiple local stochastic gradient descent iterations between communication phases. Our work is inspired by the innovative *Scaffnew* algorithm, which has considerably advanced the reduction of communication complexity in FL. We introduce FedComLoc (Federated Compressed and Local Training), integrating practical and effective compression into *Scaffnew* to further enhance communication efficiency. Extensive experiments, using the popular TopK compressor and quantization, demonstrate its prowess in substantially reducing communication overheads in heterogeneous settings.
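The *Local Training* idea described above can be sketched as follows: a client runs several local SGD steps between communications and then sends only the resulting model delta (which a compressor like the one above would then shrink). This is a generic local-SGD sketch on a toy least-squares objective; the paper's Scaffnew-based method additionally maintains control variates and communicates probabilistically, which this toy omits:

```python
import numpy as np

def local_sgd_round(w, data, lr=0.1, local_steps=5):
    """One communication round on one client: run several local SGD steps
    on a least-squares objective, then return the model delta that would
    be compressed and sent to the server. Illustrative sketch only."""
    X, y = data
    w_local = w.copy()
    for _ in range(local_steps):
        grad = X.T @ (X @ w_local - y) / len(y)  # least-squares gradient
        w_local -= lr * grad
    return w_local - w  # only this delta is communicated

# Toy usage: one client with a 2-point dataset, starting from zeros.
X = np.eye(2)
y = np.array([1.0, 2.0])
delta = local_sgd_round(np.zeros(2), (X, y))
```

Communicating once per `local_steps` gradient computations, instead of once per step, is exactly where the communication reduction of Local Training comes from.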