Enhanced Privacy Leakage from Noise-Perturbed Gradients via Gradient-Guided Conditional Diffusion Models

πŸ“… 2025-11-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In federated learning, noisy gradient perturbations still risk leaking sensitive training images, while existing gradient inversion attacks suffer severe performance degradation under noise. To address this, we propose Gradient-Guided Conditional Diffusion Models (GG-CDMs), the first approach leveraging diffusion models' denoising capability for gradient reconstruction without requiring prior knowledge of the target data distribution. GG-CDM conditions the reverse denoising process on the observed noisy gradients, enabling robust image recovery. We theoretically derive bounds on the reconstruction error and establish convergence guarantees for the attack, revealing how noise magnitude and model architecture jointly influence reconstruction fidelity. Extensive experiments demonstrate that GG-CDM significantly outperforms state-of-the-art methods under Gaussian gradient noise, improving PSNR by up to 3.2 dB and confirming its superior robustness and effectiveness in high-noise regimes.
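The conditioning described above can be sketched, in classifier-guidance style, as a modified reverse denoising step. The exact guidance form used in the paper may differ; the guidance scale s_t and the denoised estimate x̂_0 below are illustrative:

```latex
x_{t-1} = \mu_\theta(x_t, t)
  \;-\; s_t \,\nabla_{x_t}\,
  \bigl\| g_{\mathrm{obs}} - \nabla_{W}\mathcal{L}\bigl(f_W(\hat{x}_0(x_t)), y\bigr)\bigr\|_2^2
  \;+\; \sigma_t z, \qquad z \sim \mathcal{N}(0, I)
```

Here μ_θ is the unconditional denoising mean, and the added term nudges each sample toward images whose model gradients match the observed noisy gradient g_obs, so the diffusion prior and the gradient-matching objective are optimized jointly along the reverse trajectory.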

πŸ“ Abstract
Federated learning synchronizes models through gradient transmission and aggregation. However, these gradients pose significant privacy risks, as sensitive training data is embedded within them. Existing gradient inversion attacks suffer from significantly degraded reconstruction performance when gradients are perturbed by noise, a common defense mechanism. In this paper, we introduce Gradient-Guided Conditional Diffusion Models (GG-CDMs) for reconstructing private images from leaked gradients without prior knowledge of the target data distribution. Our approach leverages the inherent denoising capability of diffusion models to circumvent the partial protection offered by noise perturbation, thereby improving attack performance under such defenses. We further provide a theoretical analysis of the reconstruction error bounds and the convergence properties of the attack loss, characterizing the impact of key factors, such as noise magnitude and attacked model architecture, on reconstruction quality. Extensive experiments demonstrate our attack's superior reconstruction performance with Gaussian noise-perturbed gradients, and confirm our theoretical findings.
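The abstract's core mechanism, steering reconstruction by matching the observed noisy gradient, can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: a single linear layer stands in for the attacked model, and annealed gradient descent stands in for the trained diffusion sampler, so this only demonstrates the gradient-matching guidance signal, not GG-CDMs themselves.

```python
import numpy as np

# Toy sketch of gradient-matching reconstruction under a Gaussian-noise
# defense. All names and settings are illustrative, not the paper's setup.
rng = np.random.default_rng(0)
d_in, d_out = 8, 4
W = rng.normal(size=(d_out, d_in))       # attacked model's weights
x_true = rng.normal(size=d_in)           # private input to reconstruct
y = rng.normal(size=d_out)               # regression target

def client_gradient(x):
    """Leaked gradient of 0.5*||W@x - y||^2 w.r.t. W (an outer product)."""
    return np.outer(W @ x - y, x)

# The defender perturbs the shared gradient with Gaussian noise.
sigma_dp = 0.05
g_obs = client_gradient(x_true) + sigma_dp * rng.normal(size=(d_out, d_in))

def loss_and_grad(x):
    """Gradient-matching loss ||g_obs - g(x)||_F^2 and its gradient in x."""
    r = W @ x - y
    E = np.outer(r, x) - g_obs
    return np.sum(E ** 2), 2.0 * (W.T @ E @ x + E.T @ r)

# Annealed, normalized descent: decaying steps plus decaying injected noise
# loosely mimic the reverse diffusion schedule the guidance would ride on.
x = rng.normal(size=d_in)
init_loss, _ = loss_and_grad(x)
steps = 500
for t in range(steps):
    _, g = loss_and_grad(x)
    step = 0.05 * (1.0 - t / steps) + 1e-3
    x -= step * g / (np.linalg.norm(g) + 1e-12)
    x += 0.005 * (1.0 - t / steps) * rng.normal(size=d_in)

final_loss, _ = loss_and_grad(x)
print(init_loss, final_loss)  # matching loss should drop sharply
```

The injected noise plays the role the paper attributes to the diffusion prior at a cartoon level, helping the search escape poor basins early while the decaying schedule lets it settle later; a real GG-CDM would instead draw that structure from a trained denoiser.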
Problem

Research questions and friction points this paper is trying to address.

Reconstructing private images from noisy gradients in federated learning
Overcoming noise perturbation defenses using diffusion models
Analyzing reconstruction error bounds under various attack conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using diffusion models to reconstruct private images from gradients
Leveraging denoising capability to bypass noise perturbation defenses
Providing theoretical analysis of reconstruction error bounds
Jiayang Meng
School of Information, Renmin University of China, Beijing 100872, China
Tao Huang
School of Computer Science and Big Data, Minjiang University, Fuzhou, Fujian 350108, China
Hong Chen
School of Information, Renmin University of China, Beijing 100872, China
Chen Hou
Guolong Zheng
School of Computer Science and Big Data, Minjiang University, Fuzhou, Fujian 350108, China