🤖 AI Summary
To address the challenge of simultaneously achieving differential privacy (DP) and high convergence efficiency in federated learning (FL) under partial client participation, this paper proposes a novel gradient perturbation mechanism based on noise cancellation. Within the stochastic convex optimization (SCO) framework, our approach is the first to achieve strict DP guarantees alongside the optimal convergence rate of $O(1/sqrt{T})$ in the partial-participation setting—matching the rate of non-private, full-participation baselines. By jointly designing variance-reduction and noise-compensation strategies, we effectively eliminate the excess variance induced by participation sparsity, significantly outperforming existing DP-FL methods. The framework accommodates both heterogeneous and homogeneous data distributions, requires no additional communication or computational overhead, and achieves theoretical optimality while remaining practically deployable.
📝 Abstract
This paper tackles the challenge of achieving Differential Privacy (DP) in Federated Learning (FL) under partial-participation, where only a subset of the machines participate in each time-step. While previous work achieved optimal performance in full-participation settings, these methods struggled to extend to partial-participation scenarios. Our approach fills this gap by introducing a novel noise-cancellation mechanism that preserves privacy without sacrificing convergence rates or computational efficiency. We analyze our method within the Stochastic Convex Optimization (SCO) framework and show that it delivers optimal performance for both homogeneous and heterogeneous data distributions. This work expands the applicability of DP in FL, offering an efficient and practical solution for privacy-preserving learning in distributed systems with partial participation.