🤖 AI Summary
In federated learning, per-round communication cost grows linearly with model dimension, which severely limits efficiency and cost-effectiveness at large model scales. To address this, we propose DeComFL, a novel algorithm that achieves *dimension-independent*, $\mathscr{O}(1)$ per-round communication by leveraging zeroth-order optimization: clients and the server exchange only a constant number of scalar values each round, regardless of the model's parameter count. For non-convex objectives under standard assumptions, the algorithm provably attains state-of-the-art convergence rates with linear speedup in the number of clients and local steps; under an additional low effective rank assumption, the rate is also independent of the model dimension. Experiments spanning classic deep learning training and large language model fine-tuning demonstrate dramatic reductions in communication overhead: DeComFL fine-tunes models with billions of parameters while transmitting only about 1 MB of data in total between the server and each client.
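To make the scaling gap concrete, here is a back-of-the-envelope comparison. The model size, float width, scalars per round, and round count below are illustrative assumptions for the arithmetic, not figures from the paper:

```python
# Illustrative communication accounting: O(d) dense uploads vs. O(1) scalar uploads.
d = 1_000_000_000            # assumed 1B-parameter model
bytes_per_float = 4          # fp32

per_round_dense = d * bytes_per_float               # O(d): ship every parameter
print(f"dense upload per round: {per_round_dense / 1e9:.1f} GB")   # 4.0 GB

scalars_per_round = 10       # O(1): a handful of floats, whatever the model size
rounds = 10_000              # assumed total training rounds
total_scalar = scalars_per_round * rounds * bytes_per_float
print(f"scalar traffic over {rounds} rounds: {total_scalar / 1e6:.1f} MB")  # 0.4 MB
```

Even accumulated over thousands of rounds, the scalar-only traffic stays in the megabyte range, while a single dense round already costs gigabytes.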
📝 Abstract
Federated Learning (FL) offers a promising framework for collaborative and privacy-preserving machine learning across distributed data sources. However, the substantial communication costs associated with FL significantly challenge its efficiency. Specifically, in each communication round, the communication costs scale linearly with the model's dimension, which presents a formidable obstacle, especially in large model scenarios. Despite various communication-efficient strategies, the intrinsic dimension-dependent communication cost remains a major bottleneck for current FL implementations. This paper proposes a novel dimension-free communication algorithm, DeComFL, which leverages zeroth-order optimization techniques and reduces the communication cost from $\mathscr{O}(d)$ to $\mathscr{O}(1)$ by transmitting only a constant number of scalar values between clients and the server in each round, regardless of the dimension $d$ of the model parameters. Theoretically, for non-convex functions, we prove that our algorithm achieves state-of-the-art rates, which exhibit a linear speedup in the number of clients and local steps under standard assumptions. Under an additional low effective rank assumption, we further show that the convergence rate is independent of the model dimension $d$ as well. Empirical evaluations, encompassing both classic deep learning training and large language model fine-tuning, demonstrate significant reductions in communication overhead. Notably, DeComFL achieves this by transmitting only around 1 MB of data in total between the server and a client to fine-tune a model with billions of parameters. Our code is available at https://github.com/ZidongLiu/DeComFL.
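The mechanism that makes $\mathscr{O}(1)$ communication possible can be sketched in a few lines: when client and server share a random seed, the perturbation direction used for zeroth-order estimation can be regenerated on either side, so only a scalar finite-difference value ever crosses the network. The NumPy sketch below is a minimal illustration under simplified assumptions (a toy quadratic objective, full client participation, one perturbation per round); the function names are ours for illustration, not the repository's API:

```python
import numpy as np

def zo_scalar(local_loss, w, seed, mu=1e-3):
    """Client side: two forward passes yield a directional-derivative scalar.
    Only this float (plus the shared seed) is communicated; the perturbation
    vector z itself never leaves the device."""
    z = np.random.default_rng(seed).standard_normal(w.shape)
    return (local_loss(w + mu * z) - local_loss(w - mu * z)) / (2 * mu)

def apply_update(w, seed, g, lr):
    """Any party holding the seed regenerates the same z and updates locally."""
    z = np.random.default_rng(seed).standard_normal(w.shape)
    return w - lr * g * z

# Toy federated run: three "clients" with quadratic losses around different targets.
targets = [np.full(100, c) for c in (-1.0, 0.0, 1.0)]
client_losses = [lambda w, t=t: float(np.sum((w - t) ** 2)) for t in targets]

w = np.full(100, 5.0)                  # shared 100-dimensional model
for rnd in range(500):                 # per round: one scalar per client up, one down
    g_avg = np.mean([zo_scalar(f, w, seed=rnd) for f in client_losses])
    w = apply_update(w, seed=rnd, g=g_avg, lr=0.005)

print(f"avg client loss: {np.mean([f(w) for f in client_losses]):.1f}")  # decreases
```

In this sketch each round costs a single float per client upload and a single averaged float (plus the seed) downstream, independent of whether the model has a hundred parameters or billions.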