Convergence-Privacy-Fairness Trade-Off in Personalized Federated Learning

📅 2025-06-17
🏛️ IEEE Transactions on Machine Learning in Communications and Networking
🤖 AI Summary
Addressing the challenge of jointly optimizing differential privacy (DP), model convergence, and fairness in personalized federated learning (PFL), this paper proposes DP-Ditto, the first PFL framework to balance these three objectives under strict DP constraints. Theoretically, it derives the first convergence upper bound for personalized models under DP, identifies the optimal number of global aggregation rounds, and formally establishes the feasibility of jointly optimizing convergence and fairness. Methodologically, DP-Ditto tightly integrates the Ditto architecture with an adaptive DP mechanism and introduces a verifiable fairness metric with a corresponding optimization strategy. Extensive experiments on multiple benchmark datasets demonstrate that DP-Ditto significantly outperforms state-of-the-art baselines, including FedAMP and pFedMe, achieving a 9.66% improvement in accuracy and a 32.71% gain in fairness.

📝 Abstract
Personalized federated learning (PFL), e.g., the renowned Ditto, strikes a balance between personalization and generalization by conducting federated learning (FL) to guide personalized learning (PL). While FL is unaffected by personalized model training, in Ditto, PL depends on the outcome of FL. However, the clients’ concern about their privacy and the consequent perturbation of their local models can affect the convergence and (performance) fairness of PL. This paper presents a PFL framework, called DP-Ditto, which is a non-trivial extension of Ditto under the protection of differential privacy (DP), and analyzes the trade-off among its privacy guarantee, model convergence, and performance distribution fairness. We also analyze the convergence upper bound of the personalized models under DP-Ditto and derive the optimal number of global aggregations given a privacy budget. Further, we analyze the performance fairness of the personalized models, and reveal the feasibility of optimizing DP-Ditto jointly for convergence and fairness. Experiments validate our analysis and demonstrate that DP-Ditto can surpass the DP-perturbed versions of the state-of-the-art PFL models, such as FedAMP, pFedMe, APPLE, and FedALA, by over 32.71% in fairness and 9.66% in accuracy.
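The mechanism the abstract describes — an FL step whose DP-perturbed aggregate then acts as a regularizer for each client's personalized model (Ditto's objective f_k(v_k) + (λ/2)‖v_k − w‖²) — can be sketched on a toy quadratic problem. Everything below (the per-client losses, clipping threshold, noise scale, learning rates, and round count) is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy heterogeneity: each client k has its own optimum theta_k, with
# local loss f_k(x) = 0.5 * ||x - theta_k||^2.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]

def clip_and_noise(update, clip=1.0, sigma=0.3):
    """Gaussian mechanism: bound the update's L2 norm, then add noise."""
    update = update / max(1.0, np.linalg.norm(update) / clip)
    return update + rng.normal(0.0, sigma * clip, size=update.shape)

def dp_ditto(rounds=50, lr=0.3, lam=0.5):
    w = np.zeros(2)                       # global model (FL part)
    v = [np.zeros(2) for _ in targets]    # personalized models (PL part)
    for _ in range(rounds):
        # FL step: clients share DP-perturbed gradients of f_k at w;
        # the server averages them into the global update.
        noisy = [clip_and_noise(w - t) for t in targets]
        w = w - lr * np.mean(noisy, axis=0)
        # PL step: each client descends Ditto's personalized objective
        # f_k(v_k) + (lam/2) * ||v_k - w||^2, pulling v_k toward w.
        for k, t in enumerate(targets):
            grad = (v[k] - t) + lam * (v[k] - w)
            v[k] = v[k] - lr * grad
    return w, v

w, v = dp_ditto()
```

With these toy losses the global model drifts toward the mean of the client optima (perturbed by the DP noise), while each personalized model settles between its own optimum and the global model, with λ controlling that pull.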
Problem

Research questions and friction points this paper is trying to address.

Balancing privacy, convergence, and fairness in personalized federated learning
Analyzing trade-offs in differential privacy-protected PFL (DP-Ditto)
Optimizing global aggregations for convergence and fairness under privacy constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

DP-Ditto extends Ditto with differential privacy
Analyzes convergence-fairness-privacy trade-off in PFL
Optimizes global aggregations under privacy budget
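This summary does not define the paper's fairness metric; a common proxy for performance-distribution fairness in the PFL literature is the variance of per-client test accuracies (lower variance = fairer). A minimal sketch under that assumption, not the paper's exact definition:

```python
import numpy as np

def performance_fairness(client_accs):
    """Variance of per-client accuracies as a fairness proxy:
    smaller values mean a more uniform performance distribution.
    (Assumed proxy; the paper's verifiable metric may differ.)"""
    return float(np.var(np.asarray(client_accs, dtype=float)))

# A flatter accuracy profile scores as fairer (smaller variance).
uniform = performance_fairness([0.80, 0.81, 0.79, 0.80])
skewed = performance_fairness([0.95, 0.60, 0.85, 0.70])
```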
Xiyu Zhao
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Qimei Cui
Professor, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications
B5G/6G wireless communications; mobile computing and IoT
Weicai Li
Beijing University of Posts and Telecommunications
Wireless Federated Learning
Wei Ni
FIEEE, AAIA Fellow, Senior Principal Scientist & Conjoint Professor, CSIRO/UNSW
6G security and privacy; connected and trusted intelligence; applied AI/ML
Ekram Hossain
Professor, University of Manitoba, Canada, IEEE Fellow
Wireless communication networks; radio resource allocation; cognitive radio; multi-tier cellular networks
Quan Z. Sheng
School of Computing, Macquarie University, Sydney, NSW 2109, Australia
Xiaofeng Tao
Beijing University of Posts and Telecommunications
wireless communication
Ping Zhang
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China; Department of Broadband Communication, Peng Cheng Laboratory, Shenzhen 518055, China