Differentially Private Federated Learning: A Systematic Review

📅 2024-05-14
🏛️ arXiv.org
📈 Citations: 24
✨ Influential: 1
🤖 AI Summary
Problem: Research integrating federated learning (FL) with differential privacy (DP) lacks a systematic survey; in particular, prior reviews overlook distinctions among privacy targets (e.g., clients, gradients, models) and privacy guarantee strengths (e.g., LDP, GDP, CDP) across FL architectural layers. Method: The authors conduct a systematic literature review (SLR), complemented by theoretical analysis of DP principles and a decomposition of the FL system architecture, to develop a unified taxonomy. Contribution/Results: This work introduces the first comprehensive classification framework that jointly considers privacy definitions, guarantee levels, and FL layer-specific contexts. It constructs a structured knowledge graph mapping the protection boundaries and applicability conditions of each mechanism, uncovers evolutionary trends in privacy-preserving FL, and identifies fundamental bottlenecks in the privacy–utility trade-off. The framework provides reusable evaluation criteria and a practical roadmap for designing privacy-enhanced FL systems.
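For context, the guarantee these DP models instantiate is the standard (ε, δ)-differential privacy definition sketched below; the "neighborhood levels" the taxonomy distinguishes turn on how the adjacent datasets D and D' are chosen, e.g., differing in a single record (as in central DP at a trusted server) versus in one client's entire contribution (client-level DP). This is the textbook definition for orientation, not a formula reproduced from the paper.

```latex
% A randomized mechanism M is (\varepsilon, \delta)-differentially private
% if for all neighboring datasets D, D' and every measurable output set S:
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S] + \delta
% LDP is the special case where each client randomizes its own data before
% sharing, so "neighboring datasets" are two possible values of a single
% client's input and no trusted aggregator is required.
```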

📝 Abstract
In recent years, privacy and security concerns in machine learning have propelled trusted federated learning to the forefront of research. Differential privacy has emerged as the de facto standard for privacy protection in federated learning due to its rigorous mathematical foundation and provable guarantees. Despite extensive research on algorithms that incorporate differential privacy within federated learning, there remains an evident deficiency in systematic reviews that categorize and synthesize these studies. Our work presents a systematic overview of differentially private federated learning. Existing taxonomies have not adequately considered the objects and levels of privacy protection provided by various differential privacy models in federated learning. To fill this gap, we propose a new taxonomy of differentially private federated learning based on the definitions and guarantees of various differential privacy models and federated scenarios. Our classification allows for a clear delineation of the protected objects across various differential privacy models and their respective neighborhood levels within federated learning environments. Furthermore, we explore the applications of differential privacy in federated learning scenarios. Our work provides valuable insights into privacy-preserving federated learning and suggests practical directions for future research.
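To make the "protected objects" distinction concrete, below is a minimal sketch of client-level central DP in one FedAvg aggregation round. It is illustrative only, not an algorithm from the paper: clip_norm and noise_multiplier are hypothetical parameters, and the ε/δ accounting a real deployment needs is omitted. The server bounds each client's influence by L2 clipping, then adds Gaussian noise calibrated to that bound, so the guarantee covers a client's whole update rather than a single training record.

```python
# Sketch of one DP-FedAvg aggregation round with client-level central DP.
# Illustrative parameters only; a real system would track (epsilon, delta)
# with a privacy accountant and subsample clients each round.
import numpy as np

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Average client updates after L2 clipping and Gaussian noising."""
    rng = np.random.default_rng(seed)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Rescale so each update's L2 norm is at most clip_norm, bounding
        # any single client's contribution (the sensitivity of the sum).
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    # Noise the summed updates with std proportional to that sensitivity.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped[0].shape)
    return noisy_sum / len(client_updates)

# Toy usage: three clients, each submitting a 4-dimensional update.
updates = list(np.random.default_rng(1).normal(size=(3, 4)))
print(dp_fedavg_round(updates))
```

Under local DP, by contrast, each client would perturb its update before upload, removing trust in the server at a higher utility cost; this is exactly the kind of trade-off the proposed taxonomy organizes.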
Problem

Research questions and friction points this paper is trying to address.

Algorithms combining differential privacy and federated learning lack a systematic review
Existing taxonomies overlook the protected objects and guarantee levels of different DP models
Applications of differential privacy across federated learning scenarios remain under-synthesized
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic overview of differentially private federated learning
New taxonomy based on DP model definitions, guarantees, and federated scenarios
Classification delineates protected objects and their neighborhood levels
👥 Authors
Jie Fu
Stevens Institute of Technology, USA
Yuan Hong
University of Connecticut
Security, Privacy, AI Security, Applied Cryptography
Xinpeng Ling
Tongji University
Federated Learning, Differential Privacy, Convex Optimization
Leixia Wang
Renmin University of China, China
Xun Ran
The Hong Kong Polytechnic University, China
Zhiyu Sun
East China Normal University, China
Wendy Hui Wang
Stevens Institute of Technology
Security, privacy, robustness, and fairness of machine learning
Zhili Chen
East China Normal University, China
Yang Cao
Tokyo Institute of Technology, Japan