First Provable Guarantees for Practical Private FL: Beyond Restrictive Assumptions

📅 2025-12-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing differentially private (DP) federated learning (FL) methods rely on unrealistic assumptions—such as bounded gradients and homogeneous data—and struggle to accommodate practical constraints like multi-step local updates and partial client participation. To address this, we propose Fed-α-NormEC, a novel DP-FL framework operating under standard, realistic assumptions: it requires no gradient boundedness and supports non-convex objectives and heterogeneous data. Fed-α-NormEC is the first method to provably achieve both $O(1/\sqrt{T})$ convergence and $(\varepsilon,\delta)$-DP guarantees in settings with multi-step local training and partial client participation. Its design integrates α-norm-based gradient clipping, decoupled client- and server-side step sizes, privacy amplification via subsampling, and incremental local optimization. Extensive experiments on private image and text classification tasks demonstrate that Fed-α-NormEC significantly outperforms state-of-the-art baselines in utility–privacy trade-offs.
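To make the design concrete, below is a hypothetical sketch of one private FL round combining the ingredients the summary lists: client subsampling (partial participation), α-norm clipping of local updates, decoupled client/server step sizes, and Gaussian noise. All function names, parameter values, and the noise calibration are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def clip_alpha_norm(update, clip, alpha=2.0):
    # Scale the update so its alpha-norm is at most `clip`
    # (alpha=2 recovers standard L2 clipping).
    norm = np.sum(np.abs(update) ** alpha) ** (1.0 / alpha)
    return update * min(1.0, clip / (norm + 1e-12))

def private_fl_round(w, client_grads, client_lr=0.1, server_lr=1.0,
                     sample_frac=0.5, clip=1.0, noise_mult=1.0, rng=None):
    # One hypothetical DP-FL round; `client_grads` is a list of
    # per-client gradient oracles (callables of the model weights).
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(client_grads)
    # Partial participation: subsample clients, which also enables
    # privacy amplification by subsampling.
    chosen = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)
    updates = []
    for i in chosen:
        # Single local step per sampled client here; the real method
        # allows multiple (full or incremental) local steps.
        local_update = -client_lr * client_grads[i](w)
        updates.append(clip_alpha_norm(local_update, clip))
    # Aggregate clipped updates and add Gaussian noise calibrated
    # to the clipping threshold (illustrative calibration).
    agg = np.mean(updates, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(chosen), size=w.shape)
    # Decoupled server step size applied to the noisy aggregate.
    return w + server_lr * (agg + noise)
```

The decoupled `client_lr`/`server_lr` pair mirrors the summary's separate client- and server-side step sizes; tuning them independently is what lets local progress and global aggregation be controlled separately.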

📝 Abstract
Federated Learning (FL) enables collaborative training on decentralized data. Differential privacy (DP) is crucial for FL, but current private methods often rely on unrealistic assumptions (e.g., bounded gradients or bounded heterogeneity), hindering practical application. Existing works that relax these assumptions typically neglect practical FL features, including multiple local updates and partial client participation. We introduce Fed-$α$-NormEC, the first differentially private FL framework providing provable convergence and DP guarantees under standard assumptions while fully supporting these practical features. Fed-$α$-NormEC integrates local updates (full and incremental gradient steps), separate server and client stepsizes, and, crucially, partial client participation, which is essential for real-world deployment and vital for privacy amplification. Our theoretical guarantees are corroborated by experiments on private deep learning tasks.
Problem

Research questions and friction points this paper is trying to address.

Providing private FL with provable guarantees under standard assumptions
Supporting practical features like multiple local updates and partial participation
Overcoming unrealistic assumptions in existing private FL methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Private FL framework with provable convergence guarantees
Supports multiple local updates and partial client participation
Uses separate server and client stepsizes for decoupled control of local and global updates