Mjolnir: Breaking the Shield of Perturbation-Protected Gradients via Adaptive Diffusion

📅 2024-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes fundamental security vulnerabilities in Gaussian/Laplacian-noise-based gradient privacy mechanisms widely adopted in federated learning. We propose the first black-box attack framework—requiring no access to the original model or external data—that effectively strips off injected noise to recover original gradients. Our approach leverages the diffusion properties of additive noise to construct a lightweight surrogate client model, and integrates gradient structural priors with an adaptive reverse-diffusion sampling strategy for efficient gradient denoising. Crucially, it invalidates the conventional differential privacy assumption that noise is irreversible. Empirical evaluation on DNNs and CNNs demonstrates substantial improvements in gradient reconstruction fidelity, revealing severe privacy leakage risks in mainstream gradient-level differential privacy mechanisms deployed in federated learning settings.
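For context, here is a minimal sketch of the kind of gradient perturbation defense the paper targets: per-update norm clipping followed by additive Gaussian or Laplace noise. The function name `perturb_gradient` and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_gradient(grad, mechanism="gaussian", clip_norm=1.0, noise_scale=0.1):
    """Clip a flattened gradient to a bounded L2 norm, then add Gaussian
    or Laplace noise, as in common gradient-level DP defenses for FL."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))  # norm clipping
    if mechanism == "gaussian":
        noise = rng.normal(0.0, noise_scale, size=grad.shape)
    elif mechanism == "laplace":
        noise = rng.laplace(0.0, noise_scale, size=grad.shape)
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")
    return clipped + noise

# Example: perturb a toy gradient vector before it leaves the client
g = np.array([0.3, -1.2, 0.7, 0.05])
print(perturb_gradient(g, mechanism="laplace"))
```

Mjolnir's premise is that noise drawn this way has the same additive structure as the forward process of a diffusion model, which is what makes a learned reverse process a plausible denoiser.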

📝 Abstract
Perturbation-based mechanisms, such as differential privacy, mitigate gradient leakage attacks by introducing noise into the gradients, thereby preventing attackers from reconstructing clients' private data from the leaked gradients. However, can gradient perturbation protection mechanisms truly defend against all gradient leakage attacks? In this paper, we present the first attempt to break the shield of gradient perturbation protection in Federated Learning for the extraction of private information. We focus on common noise distributions, specifically Gaussian and Laplace, and apply our approach to DNN and CNN models. We introduce Mjolnir, a perturbation-resilient gradient leakage attack that is capable of removing perturbations from gradients without requiring additional access to the original model structure or external data. Specifically, we leverage the inherent diffusion properties of gradient perturbation protection to develop a novel diffusion-based gradient denoising model for Mjolnir. By constructing a surrogate client model that captures the structure of perturbed gradients, we obtain crucial gradient data for training the diffusion model. We further utilize the insight that monitoring disturbance levels during the reverse diffusion process can enhance gradient denoising capabilities, allowing Mjolnir to generate gradients that closely approximate the original, unperturbed versions through adaptive sampling steps. Extensive experiments demonstrate that Mjolnir effectively recovers the protected gradients and exposes the Federated Learning process to the threat of gradient leakage, achieving superior performance in gradient denoising and private data recovery.
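The abstract's adaptive sampling can be pictured as a standard DDPM-style reverse loop that halts early once a monitored disturbance level falls below a threshold. The sketch below is a generic illustration under assumed choices (a linear beta schedule, an untrained placeholder noise predictor with time conditioning omitted); `eps_theta`, `adaptive_denoise`, and `stop_level` are hypothetical names, and the paper's surrogate-trained denoiser and exact monitoring rule are not reproduced here.

```python
import torch

torch.manual_seed(0)

T = 50                                 # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)  # generic linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Placeholder noise predictor. Mjolnir instead trains its denoiser on
# perturbed gradients collected via a surrogate client model.
eps_theta = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.SiLU(), torch.nn.Linear(64, 8))

@torch.no_grad()
def adaptive_denoise(x_t, start_t, stop_level=0.05):
    """DDPM-style reverse sampling over a gradient vector that stops
    once the predicted residual disturbance drops below stop_level."""
    x = x_t
    for t in range(start_t, -1, -1):
        eps = eps_theta(x)                          # predicted noise at step t
        level = eps.norm() / x.norm().clamp_min(1e-12)
        if level < stop_level:                      # adaptive early stop
            break
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:                                   # no noise at the last step
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

noisy_grad = torch.randn(8)   # stands in for a perturbed gradient
print(adaptive_denoise(noisy_grad, start_t=T - 1))
```

The early-stop test is one plausible reading of "monitoring disturbance levels during the reverse diffusion process": rather than always running a fixed number of steps, the sampler exits as soon as the estimated residual noise is small relative to the signal.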
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Differential Privacy
Gradient Leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mjolnir
Gradient De-randomization
Privacy Attack in Federated Learning
👥 Authors
Xuan Liu
The Hong Kong Polytechnic University, Hong Kong, China
Siqi Cai
Wuhan University of Technology, Wuhan, China
Qihua Zhou
Shenzhen University
Edge AI Systems · Tiny Machine Learning · On-Device Learning · Distributed Machine Learning
Song Guo
Chair Professor of CSE, HKUST
Large Language Model · Edge AI · Machine Learning Systems
Ruibin Li
University of Toronto
Persistent Memory · File System
Kai Lin
Wuhan University of Technology, Wuhan, China