🤖 AI Summary
Addressing the challenge of jointly optimizing differential privacy (DP), model convergence, and fairness in personalized federated learning (PFL), this paper proposes DP-Ditto, a PFL framework that balances these three objectives under strict DP constraints. Theoretically, the authors derive a convergence upper bound for personalized models under DP and identify the optimal number of global aggregation rounds under a given privacy budget; they also formally establish the feasibility of jointly optimizing convergence and fairness. Methodologically, DP-Ditto tightly integrates the Ditto architecture with a DP perturbation mechanism and analyzes the performance fairness of the resulting personalized models. Extensive experiments on multiple benchmark datasets show that DP-Ditto significantly outperforms DP-perturbed versions of state-of-the-art baselines, including FedAMP and pFedMe, by over 9.66% in accuracy and 32.71% in fairness.
📝 Abstract
Personalized federated learning (PFL), e.g., the renowned Ditto, strikes a balance between personalization and generalization by conducting federated learning (FL) to guide personalized learning (PL). In Ditto, FL is unaffected by personalized model training, but PL depends on the outcome of FL. Consequently, the clients' concern about their privacy, and the resulting perturbation of their local models, can affect the convergence and (performance) fairness of PL. This paper presents a PFL framework, called DP-Ditto, which is a non-trivial extension of Ditto under the protection of differential privacy (DP), and analyzes the trade-off among its privacy guarantee, model convergence, and performance distribution fairness. We also analyze the convergence upper bound of the personalized models under DP-Ditto and derive the optimal number of global aggregations given a privacy budget. Further, we analyze the performance fairness of the personalized models and reveal the feasibility of optimizing DP-Ditto jointly for convergence and fairness. Experiments validate our analysis and demonstrate that DP-Ditto can surpass the DP-perturbed versions of state-of-the-art PFL models, such as FedAMP, pFedMe, APPLE, and FedALA, by over 32.71% in fairness and 9.66% in accuracy.
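To make the mechanics concrete, here is a minimal sketch of the two coupled updates the abstract describes: a DP-perturbed global aggregation (clip each client update, add Gaussian noise, average) and a Ditto-style personalized step that pulls each client's personal model toward the noisy global model via a proximal term. All function names, and the parameters `C` (clipping bound), `sigma` (noise multiplier), and `lam` (regularization weight), are illustrative assumptions for exposition, not the paper's actual implementation or notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(update, C):
    # Scale the update so its L2 norm is at most C (DP-SGD-style clipping).
    norm = np.linalg.norm(update)
    return update * min(1.0, C / norm) if norm > 0 else update

def dp_global_round(global_w, client_grads, lr, C, sigma):
    # One DP global aggregation: each client's model update is clipped and
    # perturbed with Gaussian noise before the server averages them.
    noisy_updates = [
        clip(-lr * g, C) + rng.normal(0.0, sigma * C, size=g.shape)
        for g in client_grads
    ]
    return global_w + np.mean(noisy_updates, axis=0)

def ditto_personal_step(v, local_grad, global_w, lr, lam):
    # Ditto-style PL step: gradient of the local loss plus a proximal pull
    # of strength lam toward the (DP-perturbed) global model.
    return v - lr * (local_grad + lam * (v - global_w))
```

The coupling the abstract highlights is visible here: DP noise enters only through `dp_global_round`, but it propagates into every personalized model via the `lam * (v - global_w)` term, which is why the privacy budget affects both the convergence and the fairness of PL.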