Smoothed Normalization for Efficient Distributed Private Optimization

📅 2025-02-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of theoretical convergence guarantees for distributed differentially private (DP) algorithms in smooth nonconvex federated learning. We propose α-NormEC, a novel DP optimization method that replaces conventional gradient clipping with smoothed normalization, thereby eliminating the strong assumption of bounded gradients. By integrating an error-feedback mechanism, α-NormEC compensates for bias induced by privacy-preserving noise and achieves rigorous DP guarantees within a distributed stochastic gradient framework. Our theoretical analysis establishes, for the first time, an $O(1/\sqrt{T})$ convergence rate for distributed DP optimization under smooth nonconvex objectives—improving upon the best-known rates of prior DP distributed methods. Extensive experiments on neural networks demonstrate α-NormEC’s robust convergence across diverse hyperparameter configurations and its practical efficacy in real-world federated settings.

📝 Abstract
Federated learning enables training machine learning models while preserving the privacy of participants. Surprisingly, there is no differentially private distributed method for smooth, non-convex optimization problems. The reason is that standard privacy techniques require bounding the participants' contributions, usually enforced via $\textit{clipping}$ of the updates. Existing literature typically ignores the effect of clipping by assuming the boundedness of gradient norms or analyzes distributed algorithms with clipping but ignores DP constraints. In this work, we study an alternative approach via $\textit{smoothed normalization}$ of the updates motivated by its favorable performance in the single-node setting. By integrating smoothed normalization with an error-feedback mechanism, we design a new distributed algorithm $\alpha$-$\sf{NormEC}$. We prove that our method achieves a superior convergence rate over prior works. By extending $\alpha$-$\sf{NormEC}$ to the DP setting, we obtain the first differentially private distributed optimization algorithm with provable convergence guarantees. Finally, our empirical results from neural network training indicate robust convergence of $\alpha$-$\sf{NormEC}$ across different parameter settings.
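The abstract contrasts conventional clipping with smoothed normalization. A minimal sketch of the two operators, assuming the common smoothed form $g/(\alpha + \|g\|)$ (an assumption about the operator, not the paper's exact definition):

```python
import numpy as np

def clip(g, c):
    # Standard clipping: rescale g so its norm is at most c.
    # Analyses of this operator typically need a bounded-gradient assumption.
    norm = np.linalg.norm(g)
    return g * min(1.0, c / norm) if norm > 0 else g

def smoothed_normalize(g, alpha):
    # Smoothed normalization (assumed form): g / (alpha + ||g||).
    # The output norm is always strictly below 1, and the map is smooth
    # in g, which is what removes the bounded-gradient requirement.
    return g / (alpha + np.linalg.norm(g))
```

Note that `smoothed_normalize` never truncates the direction discontinuously the way `clip` does at the threshold `c`; it shrinks every update continuously, which is the property the abstract highlights for the single-node setting.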
Problem

Research questions and friction points this paper is trying to address.

Federated learning privacy challenges
Differentially private distributed optimization
Smoothed normalization for convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Smoothed normalization integrates error-feedback
α-NormEC achieves superior convergence rate
First DP distributed optimization algorithm with provable guarantees
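The innovation bullets above (smoothed normalization combined with error feedback under DP noise) can be sketched as one synchronous round. This is a hypothetical illustration: the residual bookkeeping and noise placement follow generic error-feedback DP schemes, not the paper's exact α-NormEC pseudocode.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_normec_round(params, grads, errors, alpha, lr, sigma):
    """One illustrative round: each worker applies smoothed normalization to
    its error-corrected gradient, adds Gaussian noise for DP, and the server
    averages the noisy messages. Sketch only, not the authors' exact rule."""
    msgs = []
    for i, g in enumerate(grads):
        v = g + errors[i]                      # error-corrected direction
        m = v / (alpha + np.linalg.norm(v))    # smoothed normalization, ||m|| < 1
        errors[i] = v - m                      # residual fed back next round
        msgs.append(m + sigma * rng.normal(size=m.shape))  # Gaussian DP noise
    update = np.mean(msgs, axis=0)             # server-side averaging
    return params - lr * update, errors
```

Because every message has norm below 1 by construction, the Gaussian mechanism's sensitivity is bounded without clipping, while the `errors` buffers accumulate what normalization discarded so the bias does not compound across rounds.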