Local Differential Privacy Is Not Enough: A Sample Reconstruction Attack Against Federated Learning With Local Differential Privacy

📅 2025-02-12
🏛️ IEEE Transactions on Information Forensics and Security
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes a fundamental vulnerability in Local Differential Privacy (LDP) as a defense against sample reconstruction attacks in Federated Learning (FL). While existing attacks fail under gradient clipping and LDP-induced perturbation, the authors propose the first attack that reconstructs users’ sensitive training samples without significantly degrading model accuracy (<0.5% drop). The method uses a two-stage framework: (1) feature-driven gradient compression to eliminate redundancy, and (2) denoising-based reconstruction that models the noise via artificially injected zero gradients and filters it with adaptive confidence intervals. Theoretical guarantees of efficacy are provided. Empirical evaluation on CIFAR-10 and FEMNIST shows a 32% improvement in PSNR and SSIM scores above 0.78, marking the first successful reconstruction of authentic user samples under LDP-based FL.

📝 Abstract
Reconstruction attacks against federated learning (FL) aim to reconstruct users’ samples through users’ uploaded gradients. Local differential privacy (LDP) is regarded as an effective defense against various attacks, including sample reconstruction in FL, where gradients are clipped and perturbed. Existing attacks are ineffective in FL with LDP since clipped and perturbed gradients obliterate most sample information for reconstruction. Besides, existing attacks embed additional sample information into gradients to improve the attack effect and cause gradient expansion, leading to a more severe gradient clipping in FL with LDP. In this paper, we propose a sample reconstruction attack against LDP-based FL with any target models to reconstruct victims’ sensitive samples to illustrate that FL with LDP is not flawless. Considering gradient expansion in reconstruction attacks and noise in LDP, the core of the proposed attack is gradient compression and reconstructed sample denoising. For gradient compression, an inference structure based on sample characteristics is presented to reduce redundant gradients against LDP. For reconstructed sample denoising, we artificially introduce zero gradients to observe noise distribution and scale confidence interval to filter the noise. Theoretical proof guarantees the effectiveness of the proposed attack. Evaluations show that the proposed attack is the only attack that reconstructs victims’ training samples in LDP-based FL and has little impact on the target model’s accuracy. We conclude that LDP-based FL needs further improvements to defend against sample reconstruction attacks effectively.
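As background for the clip-and-perturb pipeline the abstract describes, here is a minimal sketch of a generic LDP gradient mechanism (L2 clipping followed by Gaussian noise). The function name, `clip_norm`, and `sigma` are illustrative choices, not the paper's actual mechanism or parameters.

```python
import numpy as np

def ldp_perturb_gradient(grad, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip a gradient to a bounded L2 norm, then add Gaussian noise.

    Generic clip-and-perturb step as sketched in the abstract; the
    paper's exact LDP mechanism and parameters may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))   # L2 clipping
    return clipped + rng.normal(0.0, sigma, size=grad.shape)  # perturbation

g = np.array([3.0, 4.0])                           # ||g|| = 5, exceeds clip_norm
clipped_only = ldp_perturb_gradient(g, sigma=0.0)  # noise off: pure clipping
noisy = ldp_perturb_gradient(g)                    # what the server actually sees
```

A reconstruction attack only ever observes something like `noisy`, which is why the paper's attack both compresses gradients (to limit clipping damage from gradient expansion) and denoises the reconstructed samples.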
Problem

Research questions and friction points this paper is trying to address.

Existing reconstruction attacks fail in LDP-based FL because clipped and perturbed gradients obliterate most sample information
Attacks that embed extra sample information into gradients cause gradient expansion, triggering even harsher clipping under LDP
Whether LDP actually suffices to defend FL against sample reconstruction remains an open question
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient compression: an inference structure based on sample characteristics removes redundant gradients before LDP clipping takes its toll
Sample denoising: artificially injected zero gradients reveal the noise distribution, which a scaled confidence interval then filters out
Theoretically proven effectiveness, with little impact on the target model's accuracy (<0.5% drop)
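The denoising idea above (observe noise on coordinates known to carry zero true gradient, then suppress observed values that fall inside a confidence interval around pure noise) can be sketched roughly as follows. The Gaussian-noise assumption, the fixed `z_scale`, and all names here are illustrative stand-ins; the paper's interval scaling is adaptive.

```python
import numpy as np

def confidence_interval_denoise(observed, zero_positions, z_scale=2.0):
    """Estimate the LDP noise scale from coordinates whose true gradient
    is known to be zero, then zero out observed values inside the
    resulting confidence interval (indistinguishable from noise).

    Illustrative sketch only; the paper scales the interval adaptively.
    """
    noise_samples = observed[zero_positions]   # pure-noise observations
    sigma_hat = noise_samples.std()            # empirical noise scale
    interval = z_scale * sigma_hat             # scaled confidence interval
    denoised = np.where(np.abs(observed) <= interval, 0.0, observed)
    return denoised, sigma_hat

rng = np.random.default_rng(0)
true = np.zeros(100)
true[:5] = 10.0                                # 5 informative coordinates
obs = true + rng.normal(0, 0.5, size=100)      # LDP-style Gaussian noise
zero_pos = np.arange(50, 100)                  # attacker-planted zero gradients
den, sigma_hat = confidence_interval_denoise(obs, zero_pos)
```

The informative coordinates survive the filter while most noise-only coordinates are zeroed, which is the property the reconstruction stage relies on.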
Zhichao You
School of Computer Science & Technology, Xidian University, and Shaanxi Key Laboratory of Network and System Security, Xi’an, China
Xuewen Dong
Xidian University
Shujun Li
School of Computing and the Institute of Cyber Security for Society (iCSS), University of Kent, Canterbury, UK
Ximeng Liu
College of Computer and Data Science, Fuzhou University, Fuzhou, China
Siqi Ma
The University of Wollongong
Yulong Shen
Xidian University