Federated Learning Nodes Can Reconstruct Peers' Image Data

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
In federated learning (FL), honest-but-curious clients can launch peer data reconstruction attacks via gradient inversion, causing semantic-level privacy leakage. This work is the first to demonstrate that, under standard centralized FL, a client can silently reconstruct other participants’ raw images using only sparse gradient differences across multiple model updates—without requiring access to model weights or auxiliary data. We propose a gradient inversion framework incorporating a Denoising Diffusion Implicit Model (DDIM) prior, coupled with multi-step differential analysis and regularized optimization, significantly improving reconstruction fidelity and recognizability. Experiments on CIFAR-10 and CelebA show high visual fidelity in reconstructed images; notably, some faces and objects remain accurately identifiable, confirming the practical feasibility of this silent attack. Our findings uncover a novel client-side privacy threat in FL and provide critical empirical evidence for designing effective defenses.
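To make the leakage concrete, here is a minimal sketch (not the paper's method) of why shared gradients can expose raw inputs at all: for a fully connected layer with a bias, the weight-gradient row for any output unit is that unit's error signal times the input, so dividing a weight-gradient row by the matching bias-gradient entry recovers the input exactly. All names and dimensions below are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def layer_grads(W, b, x, label):
    """Cross-entropy gradients for a single linear layer (logits = W @ x + b)."""
    p = softmax(W @ x + b)
    delta = p.copy()
    delta[label] -= 1.0                  # dL/dlogits
    return np.outer(delta, x), delta     # dL/dW = delta x^T,  dL/db = delta

def invert_input(grad_W, grad_b):
    """Each row of dL/dW equals delta_i * x, so x = grad_W[i] / grad_b[i]
    for any unit i whose bias gradient is nonzero."""
    i = np.argmax(np.abs(grad_b))        # pick the numerically safest row
    return grad_W[i] / grad_b[i]

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3072)) / np.sqrt(3072)
b = rng.normal(size=10)
x_true = rng.uniform(size=3072)          # stand-in for a flattened 32x32x3 image
gW, gb = layer_grads(W, b, x_true, label=3)
x_rec = invert_input(gW, gb)
print(np.allclose(x_rec, x_true))        # exact recovery from gradients alone
```

Real attacks (including this paper's) face noisier conditions, such as deeper networks, batched gradients, and only gradient differences rather than raw gradients, which is where iterative optimization and diffusion priors come in.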
📝 Abstract
Federated learning (FL) is a privacy-preserving machine learning framework that enables multiple nodes to train models on their local data and periodically average weight updates to benefit from other nodes' training. Each node's goal is to collaborate with other nodes to improve the model's performance while keeping its training data private. However, this framework does not guarantee data privacy. Prior work has shown that the gradient-sharing steps in FL can be vulnerable to data reconstruction attacks from an honest-but-curious central server. In this work, we show that an honest-but-curious node/client can also launch attacks to reconstruct peers' image data in a centralized system, presenting a severe privacy risk. We demonstrate that a single client can silently reconstruct other clients' private images using diluted information available within consecutive updates. We leverage state-of-the-art diffusion models to enhance the perceptual quality and recognizability of the reconstructed images, further demonstrating the risk of information leakage at a semantic level. This highlights the need for more robust privacy-preserving mechanisms that protect against silent client-side attacks during federated training.
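The abstract's "diluted information available within consecutive updates" can be illustrated with a toy simulation, assuming plain FedSGD with a known server learning rate and two participants (all values below are hypothetical): the attacker sees the global weights before and after a round, subtracts its own contribution, and isolates the peer's gradient without ever touching the server.

```python
import numpy as np

rng = np.random.default_rng(1)
D, LR = 16, 0.1                         # toy model size and server learning rate

w_t = rng.normal(size=D)                # global weights at round t (seen by all clients)
g_attacker = rng.normal(size=D)         # attacker's own gradient (known to itself)
g_peer = rng.normal(size=D)             # peer's private gradient (to be recovered)

# Server performs one FedSGD round: average the two gradients and step.
w_next = w_t - LR * (g_attacker + g_peer) / 2

# The attacker observes w_t and w_next and knows LR and the client count,
# so the averaged gradient -- and then the peer's share -- falls out directly.
g_avg = (w_t - w_next) / LR
g_peer_rec = 2 * g_avg - g_attacker

print(np.allclose(g_peer_rec, g_peer))  # peer gradient isolated exactly
```

With more clients the subtraction yields only a diluted mixture of peer gradients, which motivates the paper's multi-round differencing and diffusion-prior refinement.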
Problem

Research questions and friction points this paper is trying to address.

FL clients can reconstruct peers' private image data from shared gradients
Client-side gradient inversion risks semantic-level information leakage
Current FL lacks robust defenses against silent reconstruction attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multi-step gradient-difference inversion for data reconstruction
Leverages diffusion-model (DDIM) priors to improve reconstruction quality
Targets silent client-side attacks in centralized federated learning