🤖 AI Summary
To address the communication bottleneck in stochastic federated learning—where models or compressed model updates are specified by distributions rather than deterministic parameters—this paper proposes BICompFL, a framework that compresses both the uplink (client-to-server) and the downlink (server-to-client) transmissions. Prior stochastic FL schemes reduce communication only under the assumption of a perfect downlink; the paper shows that bi-directional compression poses inherent challenges and addresses them directly. Theoretically, it analyzes the communication cost of BICompFL through a new analysis of an importance-sampling based technique, exposing the interplay between uplink and downlink communication costs. Empirically, BICompFL reduces the total communication cost by an order of magnitude compared to multiple benchmarks while maintaining state-of-the-art accuracy.
📝 Abstract
We address the prominent communication bottleneck in federated learning (FL). We specifically consider stochastic FL, in which models or compressed model updates are specified by distributions rather than deterministic parameters. Stochastic FL offers a principled approach to compression, and has been shown to reduce the communication load under perfect downlink transmission from the federator to the clients. However, in practice, both the uplink and downlink communications are constrained. We show that bi-directional compression for stochastic FL has inherent challenges, which we address by introducing BICompFL. Our BICompFL is experimentally shown to reduce the communication cost by an order of magnitude compared to multiple benchmarks, while maintaining state-of-the-art accuracies. Theoretically, we study the communication cost of BICompFL through a new analysis of an importance-sampling based technique, which exposes the interplay between uplink and downlink communication costs.
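The abstract does not spell out the importance-sampling based technique it analyzes. As a rough illustration of the general idea behind such schemes (communicating a sample from a client's update distribution `q` using shared randomness and a common prior `p`, so that only a candidate index crosses the channel), here is a minimal sketch. All names, signatures, and the Gaussian choices below are illustrative assumptions, not the paper's actual algorithm:

```python
# Hedged sketch of importance-sampling based compression (illustrative only,
# not BICompFL itself): sender and receiver share a PRNG seed, draw the same
# candidate pool from a prior p, and communicate just one candidate index.
import numpy as np

def encode(mu_q, sigma_q, sigma_p, n_candidates, seed):
    """Client side: pick one of n_candidates prior samples, weighted by q/p.

    Only the returned index (about log2(n_candidates) bits) is transmitted;
    the candidates themselves are regenerated from the shared seed.
    """
    rng = np.random.default_rng(seed)                          # shared randomness
    candidates = rng.normal(0.0, sigma_p, size=n_candidates)   # draws from prior p
    # Importance weights w_i ∝ q(x_i) / p(x_i) for target q = N(mu_q, sigma_q^2),
    # computed in log space for numerical stability.
    log_q = -0.5 * ((candidates - mu_q) / sigma_q) ** 2 - np.log(sigma_q)
    log_p = -0.5 * (candidates / sigma_p) ** 2 - np.log(sigma_p)
    log_w = log_q - log_p
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return int(rng.choice(n_candidates, p=w))

def decode(index, sigma_p, n_candidates, seed):
    """Server side: regenerate the same candidate pool and select the index."""
    rng = np.random.default_rng(seed)
    candidates = rng.normal(0.0, sigma_p, size=n_candidates)
    return candidates[index]
```

With enough candidates relative to the divergence between `q` and `p`, the selected sample is approximately distributed according to `q`, at a cost of only the index bits per parameter. The interplay the abstract mentions arises because, under downlink compression, the clients' view of the shared prior is itself perturbed; this sketch assumes a perfectly shared prior for simplicity.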