Deep Leakage with Generative Flow Matching Denoiser

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel deep leakage attack in federated learning that leverages Flow Matching generative models as a prior to significantly enhance the fidelity and realism of reconstructed private client data. Unlike existing approaches, the method operates without access to the private data or its underlying distribution, yet achieves superior reconstruction quality across multiple datasets and model architectures. It consistently outperforms state-of-the-art techniques in pixel-level accuracy, perceptual similarity, and feature-level alignment. Furthermore, the attack is robust against common defense mechanisms, including noise injection, gradient clipping, and sparsification, and remains effective across different numbers of training epochs and client batch sizes, highlighting a critical vulnerability in current federated learning paradigms.
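To make the Flow Matching ingredient concrete, here is a minimal 1-D sketch of the conditional Flow Matching training objective: a velocity model is regressed onto the target velocity `u = x1 - x0` along the linear interpolation path `x_t = (1 - t) * x0 + t * x1`. This is illustrative only; the paper uses a pretrained image-scale FM foundation model, not this toy linear model, and all names and values below are hypothetical.

```python
import random

random.seed(0)

def fm_loss_and_grads(params, batch):
    """Conditional FM loss for a linear velocity field v(x, t) = a*x + b*t + c."""
    a, b, c = params
    la = lb = lc = loss = 0.0
    for x0, x1, t in batch:
        xt = (1 - t) * x0 + t * x1   # interpolation point on the path
        u = x1 - x0                  # conditional velocity target
        e = a * xt + b * t + c - u   # regression residual
        loss += e * e
        la += 2 * e * xt
        lb += 2 * e * t
        lc += 2 * e
    n = len(batch)
    return loss / n, (la / n, lb / n, lc / n)

def sample_batch(n=64):
    # Toy target "data" is a point mass at 2.0; base distribution is N(0, 1).
    return [(random.gauss(0, 1), 2.0, random.random()) for _ in range(n)]

params = [0.0, 0.0, 0.0]
first_loss = None
for step in range(500):
    loss, grads = fm_loss_and_grads(params, sample_batch())
    if first_loss is None:
        first_loss = loss
    params = [p - 0.05 * g for p, g in zip(params, grads)]
final_loss, _ = fm_loss_and_grads(params, sample_batch())
```

Once trained, such a velocity field is sampled by integrating the ODE from base noise toward data; it is that learned "direction toward realistic samples" that the attack reuses as a prior.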

📝 Abstract
Federated Learning (FL) has emerged as a powerful paradigm for decentralized model training, yet it remains vulnerable to deep leakage (DL) attacks that reconstruct private client data from shared model updates. While prior DL methods have demonstrated varying levels of success, they often suffer from instability, limited fidelity, or poor robustness under realistic FL settings. We introduce a new DL attack that integrates a generative Flow Matching (FM) prior into the reconstruction process. By guiding optimization toward the distribution of realistic images (represented by a flow matching foundation model), our method enhances reconstruction fidelity without requiring knowledge of the private data. Extensive experiments on multiple datasets and target models demonstrate that our approach consistently outperforms state-of-the-art attacks across pixel-level, perceptual, and feature-based similarity metrics. Crucially, the method remains effective across different training epochs, larger client batch sizes, and under common defenses such as noise injection, clipping, and sparsification. Our findings call for the development of new defense strategies that explicitly account for adversaries equipped with powerful generative priors.
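The reconstruction loop described in the abstract (match the shared gradient, while a prior pulls the guess toward plausible inputs) can be sketched on a toy problem. Below, the "model" is a single linear unit with squared loss, and a simple quadratic penalty stands in for the Flow Matching prior; this is a hypothetical illustration of gradient matching with a prior term, not the paper's actual attack or architecture.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def client_grad(w, x, y):
    """Gradient of the client loss (w.x - y)^2 with respect to the weights w."""
    err = dot(w, x) - y
    return [2.0 * err * xi for xi in x]

def reconstruct(w, g_shared, y, dim, lam=0.01, lr=0.02, steps=3000):
    """Recover the input by matching gradients, regularized by a prior term."""
    x = [0.0] * dim              # attacker's dummy input
    prior_mean = [0.5] * dim     # crude stand-in for generative-prior guidance
    for _ in range(steps):
        err = dot(w, x) - y
        g = client_grad(w, x, y)
        diff = [gj - gt for gj, gt in zip(g, g_shared)]
        grad = []
        for i in range(dim):
            # Analytic gradient of ||g(x) - g_shared||^2:
            # d g_j / d x_i = 2*w_i*x_j + 2*err*[j == i]
            gi = sum(2 * diff[j] * (2 * w[i] * x[j] + (2 * err if j == i else 0))
                     for j in range(dim))
            gi += 2 * lam * (x[i] - prior_mean[i])   # pull toward the prior
            grad.append(gi)
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

w = [0.3, -0.7, 1.1]          # shared model weights
x_private = [0.2, 0.9, 0.4]   # client's private input (unknown to the attacker)
g_shared = client_grad(w, x_private, 1.0)   # the gradient the client uploads
x_rec = reconstruct(w, g_shared, 1.0, dim=3)
```

In the paper's setting, the quadratic `prior_mean` pull is replaced by guidance from an FM foundation model, which is what pushes reconstructions toward the manifold of realistic images rather than toward an arbitrary fixed point.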
Problem

Research questions and friction points this paper is trying to address.

Deep Leakage
Federated Learning
Privacy Attack
Data Reconstruction
Generative Prior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Leakage
Flow Matching
Federated Learning
Generative Prior
Data Reconstruction