Exploring the Vulnerabilities of Federated Learning: A Deep Dive into Gradient Inversion Attacks

📅 2025-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Gradient inversion attacks (GIAs) in federated learning (FL) pose severe privacy threats by reconstructing clients’ private training data from shared gradients. Method: This work systematically characterizes GIA threat models under practical constraints and introduces a taxonomy of optimization-based (OP-GIA), generation-based (GEN-GIA), and analytics-based (ANA-GIA) attacks, evaluated across diverse FL frameworks via large-scale experiments. It identifies OP-GIA as the most practical variant despite its limited reconstruction quality, while GEN-GIA carries many dependencies and ANA-GIA is easily detectable. To counter these threats, the paper proposes a lightweight, three-stage defense pipeline tailored for FL systems, designed to preserve model utility while substantially enhancing robustness against GIAs. Contribution/Results: The study establishes empirical performance boundaries for all three GIA classes, evaluates them under realistic FL deployments, and provides a deployable privacy–utility trade-off framework. It further outlines a collaborative attack–defense research roadmap, advancing both theoretical understanding and practical mitigation of gradient-based privacy leakage in FL.

📝 Abstract
Federated Learning (FL) has emerged as a promising privacy-preserving collaborative model training paradigm without sharing raw data. However, recent studies have revealed that private information can still be leaked through shared gradient information and attacked by Gradient Inversion Attacks (GIA). While many GIA methods have been proposed, a detailed analysis, evaluation, and summary of these methods are still lacking. Although various survey papers summarize existing privacy attacks in FL, few studies have conducted extensive experiments to unveil the effectiveness of GIA and their associated limiting factors in this context. To fill this gap, we first undertake a systematic review of GIA and categorize existing methods into three types, i.e., optimization-based GIA (OP-GIA), generation-based GIA (GEN-GIA), and analytics-based GIA (ANA-GIA). Then, we comprehensively analyze and evaluate the three types of GIA in FL, providing insights into the factors that influence their performance, practicality, and potential threats. Our findings indicate that OP-GIA is the most practical attack setting despite its unsatisfactory performance, while GEN-GIA has many dependencies and ANA-GIA is easily detectable, making them both impractical. Finally, we offer a three-stage defense pipeline to users when designing FL frameworks and protocols for better privacy protection and share some future research directions from the perspectives of attackers and defenders that we believe should be pursued. We hope that our study can help researchers design more robust FL frameworks to defend against these attacks.
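As a rough illustration of the optimization-based attack (OP-GIA) described in the abstract, the sketch below has an attacker receive a client's shared gradient and run gradient descent on a dummy sample until its gradient matches. The linear-regression model, step size, and iteration count are illustrative assumptions for a minimal toy setup, not the paper's actual protocol or any specific published attack.

```python
import numpy as np

w = np.array([0.2, 0.4, -0.1, 0.3])       # shared model weights
x_true = np.array([0.5, -0.3, 0.8, 0.1])  # client's private sample
y_true = 1.0                              # client's private label

def grad(x, y):
    """Gradient of the squared loss 0.5*(w @ x - y)**2 with respect to w."""
    return (w @ x - y) * x

g_shared = grad(x_true, y_true)  # what the client sends to the server

def match_loss(x, y):
    """Gradient-matching objective: 0.5*||grad(x, y) - g_shared||^2."""
    return 0.5 * np.sum((grad(x, y) - g_shared) ** 2)

# Attacker side: descend on the dummy sample (x_hat, y_hat) so that its
# gradient reproduces the one the client shared.
x_hat = np.full(4, 0.1)
y_hat = 0.0
init_loss = match_loss(x_hat, y_hat)
lr = 0.1
for _ in range(20000):
    r = w @ x_hat - y_hat            # residual of the dummy sample
    diff = r * x_hat - g_shared      # mismatch between dummy and shared grads
    x_hat -= lr * (r * diff + (diff @ x_hat) * w)  # d(match_loss)/d(x_hat)
    y_hat -= lr * (-(diff @ x_hat))                # d(match_loss)/d(y_hat)

final_loss = match_loss(x_hat, y_hat)
print(f"matching loss: {init_loss:.4f} -> {final_loss:.6f}")
```

In this toy model the recovered sample is only determined up to the ambiguities of the gradient equation, which is one reason OP-GIA reconstruction quality can be unsatisfactory in practice, as the paper's findings note.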
Problem

Research questions and friction points this paper is trying to address.

Analyzing vulnerabilities in Federated Learning exposed by Gradient Inversion Attacks.
Evaluating effectiveness and limitations of different Gradient Inversion Attack methods.
Proposing a defense pipeline for better privacy protection in FL frameworks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic review of Gradient Inversion Attacks (GIA).
Categorizes GIA into optimization-based, generation-based, and analytics-based types.
Proposes three-stage defense pipeline for FL privacy.
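The stages of the paper's defense pipeline are not reproduced on this page. As a generic illustration of one common defensive primitive against gradient leakage, the hedged sketch below clips each client gradient to a norm bound and adds Gaussian noise before sharing (in the spirit of DP-SGD); function name and parameters are illustrative assumptions, not the paper's design.

```python
import numpy as np

def sanitize_gradient(g, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client gradient to a norm bound, then add Gaussian noise.

    Illustrative defense primitive: bounding the norm limits any single
    sample's influence, and the added noise obscures the exact gradient
    an inversion attack would try to match.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(g)
    if norm > clip_norm:
        g = g * (clip_norm / norm)          # scale down to the norm bound
    return g + rng.normal(scale=noise_std, size=g.shape)

g = np.array([3.0, 4.0])                    # raw gradient, norm 5
g_safe = sanitize_gradient(g, clip_norm=1.0, noise_std=0.05,
                           rng=np.random.default_rng(0))
print(g_safe)
```

Both knobs trade privacy against utility: tighter clipping and larger noise hinder reconstruction but also slow model convergence, which is the privacy–utility trade-off the paper's pipeline is designed to manage.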
Pengxin Guo
School of Computing and Data Science, The University of Hong Kong, Hong Kong 999077, China
Runxi Wang
School of Computing and Data Science, The University of Hong Kong, Hong Kong 999077, China
Shuang Zeng
Peking University, Georgia Institute of Technology
Self-supervised Contrastive Learning, Medical Image Segmentation, Superpixel, Large Language Model
Jinjing Zhu
HKUST(GZ); Tsinghua University; HUST
Efficient AI, Multimodal Learning, Large Language Model, Medical Image Analysis
Haoning Jiang
Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
Yanran Wang
Imperial College London
AI Safety, Trustworthy Machine Learning, Robot Learning, Aerial Robotics
Yuyin Zhou
Assistant Professor, Computer Science and Engineering, Genomics Institute, UC Santa Cruz
medical image analysis, machine learning, computer vision, AI in healthcare
Feifei Wang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong 999077, China, and also with the Materials Innovation Institute for Life Sciences and Energy (MILES), HKU-SIRI, Shenzhen 518055, China
Hui Xiong
Senior Scientist, Candela Corporation
Ultrafast dynamics, atomic molecular physics, free electron laser
Liangqiong Qu
The University of Hong Kong
Medical Image Analysis, Image Synthesis, Illumination Modeling, Federated Learning