SoK: On Gradient Leakage in Federated Learning

📅 2024-04-08
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Gradient inversion attacks (GIAs) in federated learning (FL) are widely perceived as a severe privacy threat, yet their practical feasibility under realistic FL conditions remains inadequately assessed. Method: This work systematically evaluates GIA effectiveness and limitations across real-world FL settings—incorporating non-IID data, multiple local training epochs, and model heterogeneity—by modeling attack success along three dimensions: training configuration, model architecture, and gradient post-processing. We combine theoretical analysis with large-scale empirical experiments. Contribution/Results: We find that GIA success rates drop significantly under realistic conditions; most existing attacks rely on unrealistic auxiliary assumptions (e.g., public data or perfect model knowledge). Moreover, simple defenses—such as gradient clipping or lightweight noise injection—achieve robust protection. Our study corrects the overestimation of gradient leakage risks, providing a more pragmatic security assessment framework and actionable defense guidelines for FL privacy preservation.
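The "simple defenses" the summary mentions can be illustrated with a minimal, hypothetical sketch of client-side gradient post-processing (L2-norm clipping followed by Gaussian noise). This is not the paper's code; the function name and the `clip_norm`/`noise_std` values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def postprocess_gradient(g, clip_norm=1.0, noise_std=0.01):
    """Clip a gradient vector to a maximum L2 norm, then add Gaussian noise.

    A lightweight post-processing defense of the kind the summary reports
    to be effective; clip_norm and noise_std are illustrative values, not
    recommendations from the paper.
    """
    norm = np.linalg.norm(g)
    if norm > clip_norm:
        g = g * (clip_norm / norm)  # rescale so the norm equals clip_norm
    return g + rng.normal(scale=noise_std, size=g.shape)

# Example: a large raw gradient is bounded after post-processing.
g_raw = rng.normal(size=16) * 5.0
g_safe = postprocess_gradient(g_raw)
print(np.linalg.norm(g_raw), "->", np.linalg.norm(g_safe))
```

Clipping bounds how much any single example can shift the shared gradient, and the added noise masks the residual signal an inversion attack would optimize against.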

📝 Abstract
Federated learning (FL) facilitates collaborative model training among multiple clients without raw data exposure. However, recent studies have shown that clients' private training data can be reconstructed from shared gradients in FL, a vulnerability known as gradient inversion attacks (GIAs). While GIAs have demonstrated effectiveness under ideal settings and auxiliary assumptions, their actual efficacy against practical FL systems remains under-explored. To address this gap, we conduct a comprehensive study on GIAs in this work. We start with a survey of GIAs that establishes a timeline to trace their evolution and develops a systematization to uncover their inherent threats. By rethinking GIA in practical FL systems, three fundamental aspects influencing GIA's effectiveness are identified: training setup, model, and post-processing. Guided by these aspects, we perform extensive theoretical and empirical evaluations of SOTA GIAs across diverse settings. Our findings highlight that GIA is notably constrained, fragile, and easily defensible. Specifically, GIAs exhibit inherent limitations against practical local training settings. Additionally, their effectiveness is highly sensitive to the trained model, and even simple post-processing techniques applied to gradients can serve as effective defenses. Our work provides crucial insights into the limited threats of GIAs in practical FL systems. By rectifying prior misconceptions, we hope to inspire more accurate and realistic investigations on this topic.
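The reconstruction threat the abstract describes can be illustrated with a toy, optimization-based gradient-matching attack (in the spirit of DLG-style GIAs, not the paper's own method). A scalar linear model stands in for the client; the "attacker" optimizes a dummy input so its gradient matches the shared one. All names and values here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "client": a scalar linear model with squared loss,
# loss = 0.5 * (w @ x - t) ** 2, so the gradient w.r.t. w is (w @ x - t) * x.
d = 8
w = rng.normal(size=d)        # model weights (known to the attacker)
x_true = rng.normal(size=d)   # private training example
t_true = 1.0                  # label (assumed known, a common GIA assumption)

def grad_wrt_w(x, t=t_true):
    return (w @ x - t) * x

g_star = grad_wrt_w(x_true)   # the gradient the client shares

# Attacker: optimize a dummy input so its gradient matches g_star.
def match_loss(x):
    diff = grad_wrt_w(x) - g_star
    return float(diff @ diff)

x = rng.normal(size=d)        # random initial guess
loss0 = match_loss(x)
lr = 0.01
for _ in range(2000):
    r = w @ x - t_true
    diff = grad_wrt_w(x) - g_star
    # Analytic gradient of ||g(x) - g*||^2 with respect to x.
    gx = 2.0 * ((x @ diff) * w + r * diff)
    # Backtracking step size keeps the descent stable.
    while lr > 1e-12 and match_loss(x - lr * gx) > match_loss(x):
        lr *= 0.5
    x = x - lr * gx

print(f"matching loss: {loss0:.3f} -> {match_loss(x):.6f}")
```

Even in this idealized setting the attack needs the model weights and the label; the paper's point is that realistic training setups (non-IID data, multiple local epochs, heterogeneous models) break such assumptions and sharply reduce attack success.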
Problem

Research questions and friction points this paper is trying to address.

Evaluates gradient inversion attacks in federated learning
Assesses privacy risks from shared gradients
Identifies constraints and defenses against data reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Timeline and systematization of gradient inversion attacks
Theoretical and empirical evaluation under practical FL settings
Gradient post-processing as a lightweight, effective defense
Jiacheng Du
Zhejiang University
Trustworthy AI
Jiahui Hu
Postdoctoral researcher, Embry-Riddle Aeronautical University
Machine learning, data assimilation, atmospheric science, ionosphere
Zhibo Wang
The State Key Laboratory of Blockchain and Data Security, Zhejiang University, China; School of Cyber Science and Technology, Zhejiang University, China
Peng Sun
College of Computer Science and Electronic Engineering, Hunan University, China
N. Gong
Department of Electrical and Computer Engineering, Duke University, USA
Kui Ren
Professor and Dean of Computer Science, Zhejiang University, ACM/IEEE Fellow
Data Security & Privacy, AI Security, IoT & Vehicular Security