Mitigating Privacy-Utility Trade-off in Decentralized Federated Learning via $f$-Differential Privacy

📅 2025-10-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inaccurate privacy budget accounting and the difficulty of balancing privacy protection with model utility in decentralized federated learning (DFL). To this end, we propose the first $f$-differential privacy ($f$-DP) accounting framework tailored to DFL. Methodologically, we introduce two novel mechanisms—Pairwise Network $f$-DP and Secret-based $f$-Local DP—that jointly incorporate random-walk communication, shared-key-driven noise scheduling, sparse local iterations, and structured noise injection, enabling theoretically grounded privacy amplification and precise accounting. Theoretically, by leveraging Markov chain concentration and decentralized topology analysis, we derive tighter $(varepsilon,delta)$-DP bounds. Empirically, our approach significantly outperforms Rényi DP baselines on both synthetic and real-world datasets, improving model utility by 12.7%–23.4%, thereby establishing the first empirical validation of $f$-DP’s theoretical superiority and practical viability in decentralized FL.

Technology Category

Application Category

📝 Abstract
Differentially private (DP) decentralized Federated Learning (FL) allows local users to collaborate without sharing their data with a central server. However, accurately quantifying the privacy budget of private FL algorithms is challenging due to the co-existence of complex algorithmic components such as decentralized communication and local updates. This paper addresses privacy accounting for two decentralized FL algorithms within the $f$-differential privacy ($f$-DP) framework. We develop two new $f$-DP-based accounting methods tailored to decentralized settings: Pairwise Network $f$-DP (PN-$f$-DP), which quantifies privacy leakage between user pairs under random-walk communication, and Secret-based $f$-Local DP (Sec-$f$-LDP), which supports structured noise injection via shared secrets. By combining tools from $f$-DP theory and Markov chain concentration, our accounting framework captures privacy amplification arising from sparse communication, local iterations, and correlated noise. Experiments on synthetic and real datasets demonstrate that our methods yield consistently tighter $(ε,δ)$ bounds and improved utility compared to Rényi DP-based approaches, illustrating the benefits of $f$-DP in decentralized privacy accounting.
Problem

Research questions and friction points this paper is trying to address.

Quantifying privacy budget in decentralized federated learning algorithms
Developing f-DP accounting methods for pairwise and secret-based privacy
Capturing privacy amplification from sparse communication and local iterations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pairwise Network f-DP for user pair privacy
Secret-based f-Local DP with shared secrets
Combining f-DP theory with Markov chain concentration
🔎 Similar Papers
No similar papers found.