Optimal Strategies for Federated Learning Maintaining Client Privacy

📅 2025-01-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the tripartite trade-off among privacy preservation, model utility, and communication cost in differentially private federated learning. We propose and theoretically prove that, under a fixed differential privacy budget, performing only one local training epoch per communication round—i.e., a single SGD update before each global aggregation—achieves Pareto-optimality between utility and privacy. Furthermore, we establish that increasing the number of clients monotonically improves model utility without relaxing the privacy budget. Our method builds upon the DP-SGD framework and integrates rigorous convergence analysis with a joint privacy–utility–communication model. Experiments on real-world datasets demonstrate that the single-epoch strategy improves test accuracy by 2.3–5.1% on average over multi-epoch baselines, reduces communication overhead by 40%, and exhibits strictly increasing utility with client count.

Technology Category

Application Category

📝 Abstract
Federated Learning (FL) emerged as a learning method to enable the server to train models over data distributed among various clients. These clients are protective about their data being leaked to the server, any other client, or an external adversary, and hence, locally train the model and share it with the server rather than sharing the data. The introduction of sophisticated inferencing attacks enabled the leakage of information about data through access to model parameters. To tackle this challenge, privacy-preserving federated learning aims to achieve differential privacy through learning algorithms like DP-SGD. However, such methods involve adding noise to the model, data, or gradients, reducing the model's performance. This work provides a theoretical analysis of the tradeoff between model performance and communication complexity of the FL system. We formally prove that training for one local epoch per global round of training gives optimal performance while preserving the same privacy budget. We also investigate the change of utility (tied to privacy) of FL models with a change in the number of clients and observe that when clients are training using DP-SGD and argue that for the same privacy budget, the utility improved with increased clients. We validate our findings through experiments on real-world datasets. The results from this paper aim to improve the performance of privacy-preserving federated learning systems.
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Differential Privacy
Model Performance Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Privacy-preserving Federated Learning
DP-SGD Method
Communication Optimization
🔎 Similar Papers
No similar papers found.
U
Uday Bhaskar
Machine Learning Lab, IIIT Hyderabad
V
Varul Srivastava
Machine Learning Lab, IIIT Hyderabad
A
Avyukta Manjunatha Vummintala
Machine Learning Lab, IIIT Hyderabad
N
Naresh Manwani
Machine Learning Lab, IIIT Hyderabad
Sujit Gujar
Sujit Gujar
International Institute of Information Technology, Hyderabad
Game Theory and Mechanism DesignAlgorithmic Mechanism DesignMachine Learning and Mechanism