🤖 AI Summary
Gradient inversion attacks (GIAs) in federated learning (FL) are widely perceived as a severe privacy threat, yet their practical feasibility under realistic FL conditions remains inadequately assessed.
Method: This work systematically evaluates the effectiveness and limitations of GIAs in realistic FL settings (non-IID data, multiple local training epochs, and model heterogeneity) by modeling attack success along three dimensions: training configuration, model architecture, and gradient post-processing. Theoretical analysis is combined with large-scale empirical experiments.
Contribution/Results: GIA success rates drop sharply under realistic conditions, and most existing attacks rely on unrealistic auxiliary assumptions (e.g., access to public data or perfect model knowledge). Moreover, simple defenses, such as gradient clipping or lightweight noise injection, provide robust protection. The study corrects the overestimation of gradient leakage risks, offering a more pragmatic security assessment framework and actionable defense guidelines for FL privacy preservation.
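The simple defenses named above, gradient clipping plus lightweight noise injection, amount to a post-processing step a client applies to its gradients before upload. A minimal NumPy sketch follows; the function name, parameter defaults, and noise scale are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sanitize_gradients(grads, clip_norm=1.0, noise_std=0.01, rng=None):
    """Clip the client's gradient to a maximum global L2 norm, then add
    Gaussian noise to each element.

    Illustrative sketch of the "clipping + lightweight noise" defense;
    `clip_norm` and `noise_std` are hypothetical defaults.
    """
    rng = rng or np.random.default_rng()
    # Compute the global L2 norm over all parameter tensors.
    flat = np.concatenate([g.ravel() for g in grads])
    norm = np.linalg.norm(flat)
    # Shrink only if the norm exceeds the bound (standard clipping rule).
    scale = min(1.0, clip_norm / (norm + 1e-12))
    sanitized = []
    for g in grads:
        g = g * scale
        g = g + rng.normal(0.0, noise_std, size=g.shape)  # per-element noise
        sanitized.append(g)
    return sanitized
```

Clipping bounds how much any single example can shift the shared update, and the added noise obscures the residual signal an inversion attack would optimize against; the same two operations form the core of differentially private SGD-style training.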
📝 Abstract
Federated learning (FL) facilitates collaborative model training among multiple clients without raw data exposure. However, recent studies have shown that clients' private training data can be reconstructed from shared gradients in FL, a vulnerability known as gradient inversion attacks (GIAs). While GIAs have demonstrated effectiveness under *ideal settings and auxiliary assumptions*, their actual efficacy against *practical FL systems* remains under-explored. To address this gap, we conduct a comprehensive study on GIAs in this work. We start with a survey of GIAs that establishes a timeline to trace their evolution and develops a systematization to uncover their inherent threats. By rethinking GIAs in practical FL systems, we identify three fundamental aspects that influence their effectiveness: *training setup*, *model*, and *post-processing*. Guided by these aspects, we perform extensive theoretical and empirical evaluations of state-of-the-art GIAs across diverse settings. Our findings highlight that GIA is notably *constrained*, *fragile*, and *easily defensible*. Specifically, GIAs exhibit inherent limitations against practical local training settings. Additionally, their effectiveness is highly sensitive to the trained model, and even simple post-processing techniques applied to gradients can serve as effective defenses. Our work provides crucial insights into the limited threats of GIAs in practical FL systems. By rectifying prior misconceptions, we hope to inspire more accurate and realistic investigations on this topic.