Enhancing Gradient Inversion Attacks in Federated Learning via Hierarchical Feature Optimization

πŸ“… 2026-04-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the privacy risks in federated learning arising from shared gradients, which can be exploited by adversaries to reconstruct users’ original data. To this end, the authors propose GIFD, a novel method that extends gradient inversion attacks from the initial latent space to the multi-level intermediate feature spaces of a generative adversarial network (GAN). By optimizing reconstruction layer-by-layer, GIFD enhances both fidelity and generalization. The approach incorporates an ℓ₁-ball constrained regularizer to suppress unrealistic artifacts and integrates a label mapping mechanism to effectively handle out-of-distribution inputs and label mismatches. Extensive experiments demonstrate that GIFD achieves high-fidelity, pixel-level data reconstruction across diverse federated learning settings, significantly outperforming existing baseline methods.
πŸ“ Abstract
Federated Learning (FL) has emerged as a compelling paradigm for privacy-preserving distributed machine learning, allowing multiple clients to collaboratively train a global model by transmitting locally computed gradients to a central server without exposing their private data. Nonetheless, recent studies find that the gradients exchanged in the FL system are also vulnerable to privacy leakage, e.g., an attacker can invert shared gradients to reconstruct sensitive data by leveraging pre-trained generative adversarial networks (GANs) as prior knowledge. However, existing attacks simply perform gradient inversion in the latent space of the GAN model, which limits their expressiveness and generalizability. To tackle these challenges, we propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the hierarchical features of the intermediate layers. Instead of optimizing only over the initial latent code, we progressively change the optimized layer, from the initial latent space to intermediate layers closer to the output images. In addition, we design a regularizer to avoid unreal image generation by adding a small ℓ₁-ball constraint to the search range. We also extend GIFD to the out-of-distribution (OOD) setting, which weakens the assumption that the training sets of GANs and FL tasks obey the same data distribution. Furthermore, we consider the challenging OOD scenario of label inconsistency and propose a label mapping technique as an effective solution. Extensive experiments demonstrate that our method can achieve pixel-level reconstruction and outperform competitive baselines across a variety of FL scenarios.
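The abstract's ℓ₁-ball regularizer amounts to projecting the optimized intermediate feature back onto a small ℓ₁ ball around its anchor after each update step. A minimal NumPy sketch of that projection, using the standard sorting-based algorithm; the function name, radius, and anchor variable are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def project_l1_ball(v, radius):
    """Project v onto the l1 ball {u : ||u||_1 <= radius} (sorting-based method)."""
    if np.abs(v).sum() <= radius:
        return v.copy()                                # already inside the ball
    u = np.sort(np.abs(v))[::-1]                       # magnitudes, descending
    cssv = np.cumsum(u)                                # cumulative sums
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * k > cssv - radius)[0][-1]     # last index meeting the condition
    theta = (cssv[rho] - radius) / (rho + 1.0)         # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# In a GIFD-style loop, a feature h optimized at some GAN layer would be
# re-projected onto a small ball around its anchor h0 after each gradient step:
#   h = h0 + project_l1_ball(h - h0, radius)
print(project_l1_ball(np.array([3.0, 1.0]), 1.0))  # -> [1. 0.]
```

The projection is what keeps the searched feature close to a point the generator has actually seen, which is how the constraint suppresses unrealistic artifacts.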
Problem

Research questions and friction points this paper is trying to address.

Gradient Inversion
Federated Learning
Privacy Leakage
Out-of-Distribution
Data Reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient Inversion
Federated Learning
Hierarchical Feature Optimization
Out-of-Distribution
Label Mapping
πŸ”Ž Similar Papers
No similar papers found.
Hao Fang
Tsinghua University
Trustworthy AI, AIGC Security
Wenbo Yu
Tsinghua University
Deep Learning, Trustworthy ML
Bin Chen
School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, Guangdong, China
Xuan Wang
School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, Guangdong, China
Shu-Tao Xia
SIGS, Tsinghua University
coding and information theory, machine learning, computer vision, AI security
Qing Liao
School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, Guangdong, China
Ke Xu
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China