🤖 AI Summary
Existing differentially private (DP) federated learning (FL) methods rely on unrealistic assumptions, such as bounded gradients and homogeneous data, and struggle to accommodate practical constraints like multi-step local updates and partial client participation. To address this, we propose Fed-α-NormEC, a novel DP-FL framework that operates under standard, realistic assumptions: it requires no gradient boundedness and supports non-convex objectives and heterogeneous data. Fed-α-NormEC is the first method to provably achieve both $O(1/\sqrt{T})$ convergence and $(\varepsilon,\delta)$-DP guarantees in settings with multi-step local training and partial client participation. Its design integrates α-norm-based gradient clipping, decoupled client- and server-side stepsizes, privacy amplification via subsampling, and incremental local optimization. Extensive experiments on private image and text classification tasks demonstrate that Fed-α-NormEC significantly outperforms state-of-the-art baselines in utility–privacy trade-offs.
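To make the α-norm clipping component concrete, here is a minimal sketch of generic ℓ_α norm clipping: a client update is rescaled so its α-norm does not exceed a threshold, which bounds its sensitivity before noise is added. The function name, defaults, and exact rule are illustrative assumptions, not the paper's precise Fed-α-NormEC clipping operator.

```python
import numpy as np

def alpha_norm_clip(update, clip_threshold=1.0, alpha=2.0):
    """Rescale `update` so its l_alpha norm is at most `clip_threshold`.

    Generic norm-clipping sketch; the actual Fed-α-NormEC rule may differ
    (names and defaults here are illustrative assumptions).
    """
    norm = np.sum(np.abs(update) ** alpha) ** (1.0 / alpha)
    # Leave small updates untouched; shrink large ones onto the norm ball.
    scale = min(1.0, clip_threshold / max(norm, 1e-12))
    return update * scale

# Example: a client update whose l2 norm (alpha=2) exceeds the threshold.
delta = np.array([3.0, 4.0])   # l2 norm = 5.0
clipped = alpha_norm_clip(delta, clip_threshold=1.0, alpha=2.0)
# clipped now has l2 norm exactly 1.0
```

With α = 2 this reduces to the familiar ℓ2 clipping used in DP-SGD; other α values change which coordinates dominate the norm and hence how aggressively outlier coordinates are shrunk.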
📝 Abstract
Federated Learning (FL) enables collaborative training on decentralized data. Differential privacy (DP) is crucial for FL, but current private methods often rely on unrealistic assumptions (e.g., bounded gradients or bounded data heterogeneity), hindering practical application. Existing works that relax these assumptions typically neglect practical FL features, including multiple local updates and partial client participation. We introduce Fed-$α$-NormEC, the first differentially private FL framework providing provable convergence and DP guarantees under standard assumptions while fully supporting these practical features. Fed-$α$-NormEC integrates local updates (full and incremental gradient steps), separate server and client stepsizes, and, crucially, partial client participation, which is essential for real-world deployment and vital for privacy amplification. Our theoretical guarantees are corroborated by experiments on private deep learning tasks.
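The round structure described above (client subsampling for privacy amplification, clipped local updates, Gaussian noise, and a server stepsize separate from the client stepsize) can be sketched as follows. This is an illustrative assumption-laden skeleton, not the paper's exact Fed-α-NormEC procedure; all names and defaults are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_fl_round(global_model, client_grad_fn, num_clients, sample_rate,
                client_lr=0.1, server_lr=1.0, clip=1.0, noise_mult=1.0):
    """One DP-FL round sketch (illustrative, not the paper's exact method).

    Poisson-subsamples clients, takes a clipped local gradient step per
    sampled client, adds Gaussian noise to the aggregate, and applies a
    separate server stepsize.
    """
    sampled = [c for c in range(num_clients) if rng.random() < sample_rate]
    if not sampled:
        return global_model  # no participants this round
    agg = np.zeros_like(global_model)
    for c in sampled:
        # Local update with the *client* stepsize (one step for brevity;
        # the framework supports multi-step local training).
        update = -client_lr * client_grad_fn(c, global_model)
        norm = np.linalg.norm(update)
        agg += update * min(1.0, clip / max(norm, 1e-12))  # l2 clipping
    # Gaussian mechanism on the summed clipped updates.
    agg += rng.normal(0.0, noise_mult * clip, size=agg.shape)
    # Server applies its own stepsize to the averaged noisy update.
    return global_model + server_lr * agg / len(sampled)
```

Decoupling `client_lr` from `server_lr` lets the server dampen or amplify the averaged update independently of how aggressively clients step locally, which is one of the practical features the abstract highlights.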