Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off

📅 2024-02-10
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
In differentially private federated learning (DP-FL), accumulated noise degrades semantic coherence and impedes the simultaneous optimization of utility and privacy. Method: This paper proposes FedCEO, a novel framework that introduces client-side collaborative mechanisms into DP-FL and integrates server-side tensor low-rank proximal optimization with high-frequency spectral truncation to dynamically smooth global semantics in the spectral domain. Contribution/Results: We theoretically establish a utility-privacy trade-off bound that improves on the state of the art by an order of √d, where d is the input dimension. FedCEO supports adaptive semantic recovery under multiple privacy budgets during continuous training. Extensive experiments on image datasets demonstrate that, while strictly satisfying (ε,δ)-differential privacy, FedCEO significantly improves model accuracy and outperforms state-of-the-art methods in the utility-privacy trade-off.

📝 Abstract
To defend against privacy leakage of user data, differential privacy is widely used in federated learning, but it is not free. The added noise randomly disrupts the semantic integrity of the model, and this disturbance accumulates as communication rounds increase. In this paper, we introduce a novel federated learning framework with rigorous privacy guarantees, named FedCEO, designed to strike a trade-off between model utility and user privacy by letting clients "Collaborate with Each Other". Specifically, we perform efficient tensor low-rank proximal optimization on stacked local model parameters at the server, demonstrating its capability to flexibly truncate high-frequency components in spectral space. This means FedCEO can effectively recover disrupted semantic information by smoothing the global semantic space across different privacy settings and throughout continuous training. Moreover, we improve the SOTA utility-privacy trade-off bound by an order of $\sqrt{d}$, where $d$ is the input dimension. We illustrate our theoretical results with experiments on representative image datasets, observing significant performance improvements and strict privacy guarantees under different privacy settings.
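The server-side step described above (low-rank proximal optimization that truncates high-frequency spectral components of stacked client models) can be pictured with a simplified sketch. Note the hedges: the paper operates on a tensor of stacked parameters, while this toy version flattens each client model into a row of a matrix and applies singular-value soft-thresholding (the proximal operator of the nuclear norm); `smooth_global_models` and `tau` are illustrative names, not the paper's API, and `tau` stands in for the privacy-dependent regularization strength.

```python
import numpy as np

def smooth_global_models(client_params: np.ndarray, tau: float) -> np.ndarray:
    """Toy server-side low-rank smoothing via singular-value soft-thresholding.

    client_params: (n_clients, d) matrix of flattened noisy local models.
    tau: soft-threshold level (hypothetical stand-in for the paper's
         privacy-dependent regularization strength).
    """
    U, s, Vt = np.linalg.svd(client_params, full_matrices=False)
    # Proximal step for the nuclear norm: shrink every singular value by tau;
    # small singular values (high-frequency components) are truncated to zero.
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# Toy usage: 4 clients share one clean direction; DP-style noise perturbs each copy.
rng = np.random.default_rng(0)
clean = np.tile(rng.normal(size=16), (4, 1))          # rank-1 "shared semantics"
noisy = clean + 0.5 * rng.normal(size=clean.shape)    # per-client noise
smoothed = smooth_global_models(noisy, tau=2.0)
```

Because the shared signal concentrates in the leading singular direction while independent per-client noise spreads across the remaining ones, thresholding preferentially removes noise, which is the intuition behind clients "collaborating" through the server.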
Problem

Research questions and friction points this paper is trying to address.

Balancing model utility and user privacy in federated learning
Mitigating semantic disruption from differential privacy noise
Improving the utility-privacy trade-off bound by an order of √d
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning with differential privacy guarantees
Tensor low-rank proximal optimization for semantic recovery
Improved utility-privacy trade-off bound by an order of √d
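The "differential privacy guarantees" in frameworks like this typically come from the client side via the Gaussian mechanism on norm-clipped updates. A minimal sketch, assuming the standard (ε,δ) calibration (valid for ε ≤ 1) rather than the paper's exact privacy accounting; `clip_update` and `gaussian_sigma` are illustrative names:

```python
import math
import numpy as np

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale the client update so its L2 norm is at most clip_norm (bounds sensitivity)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def gaussian_sigma(clip_norm: float, eps: float, delta: float) -> float:
    """Standard Gaussian-mechanism noise scale for (eps, delta)-DP, valid for eps <= 1."""
    return clip_norm * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

# Toy usage: clip a raw client update, then add calibrated Gaussian noise.
rng = np.random.default_rng(42)
update = rng.normal(size=1000)
clipped = clip_update(update, clip_norm=1.0)
sigma = gaussian_sigma(1.0, eps=0.5, delta=1e-5)
noisy_update = clipped + rng.normal(0.0, sigma, size=clipped.shape)
```

It is exactly this injected noise, accumulating over rounds, that the server-side spectral smoothing is designed to counteract.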
Yuecheng Li
Sun Yat-sen University, Guangzhou
Tong Wang
Sun Yat-sen University, Guangzhou
Chuan Chen
Sun Yat-sen University, Guangzhou
Jian Lou
Zhejiang University, Hangzhou
Bin Chen
Harbin Institute of Technology, Shenzhen
Lei Yang
Sun Yat-sen University, Guangzhou
Zibin Zheng
IEEE Fellow, Highly Cited Researcher, Sun Yat-sen University, China
Blockchain · Smart Contracts · Services Computing · Software Reliability