DRAUN: An Algorithm-Agnostic Data Reconstruction Attack on Federated Unlearning Systems

📅 2025-06-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Federated unlearning (FU) enables clients to remove the influence of their data from a shared model, but it introduces a novel threat: a malicious server can reconstruct the original samples requested for deletion from the unlearning updates, a form of data reconstruction attack (DRA). Method: This work is the first to demonstrate the feasibility of DRAs in FU settings and proposes DRAUN, the first DRA framework tailored to decentralized federated environments. DRAUN overcomes the fundamental limitation that centralized reconstruction attacks fail under federated constraints by introducing an algorithm-agnostic, optimization-based gradient inversion method: it jointly models client-side local updates, incorporates an adversarial loss, and employs multi-step iterative reconstruction. Contribution/Results: Evaluated across four datasets, four model architectures, and five state-of-the-art FU algorithms, DRAUN successfully reconstructs forgotten samples, exposing critical privacy vulnerabilities in existing FU protocols.
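To see why model updates can leak training data at all, the core gradient-inversion idea can be illustrated with a toy closed-form case. This is an illustrative sketch, not DRAUN's actual optimization procedure; all names (`W`, `grad_W`, `x_rec`, etc.) are hypothetical. For a fully connected layer with bias trained on a single sample, the weight gradient is the outer product of the error signal and the input, so a server that observes the gradients can read the input off directly by dividing a weight-gradient row by the corresponding bias gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Victim's linear layer: out = W @ x + b, squared-error loss against label y.
d_in, d_out = 8, 3
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)
x = rng.normal(size=d_in)            # private training sample
y = rng.normal(size=d_out)

# Gradients a server would observe in a client's update:
delta = 2 * (W @ x + b - y)          # dL/d(pre-activation)
grad_W = np.outer(delta, x)          # dL/dW = delta x^T
grad_b = delta                       # dL/db = delta

# Reconstruction: row i of grad_W equals grad_b[i] * x, so dividing
# any row by its bias gradient recovers the private input exactly.
i = int(np.argmax(np.abs(grad_b)))   # pick a numerically stable row
x_rec = grad_W[i] / grad_b[i]

assert np.allclose(x_rec, x)
```

Real attacks such as the one summarized above face deeper networks, batched updates, and unlearning-specific update rules, which is why they resort to iterative optimization rather than this closed form; the toy case only shows that per-sample gradients carry enough information to invert.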

๐Ÿ“ Abstract
Federated Unlearning (FU) enables clients to remove the influence of specific data from a collaboratively trained shared global model, addressing regulatory requirements such as GDPR and CCPA. However, this unlearning process introduces a new privacy risk: A malicious server may exploit unlearning updates to reconstruct the data requested for removal, a form of Data Reconstruction Attack (DRA). While DRAs for machine unlearning have been studied extensively in centralized Machine Learning-as-a-Service (MLaaS) settings, their applicability to FU remains unclear due to the decentralized, client-driven nature of FU. This work presents DRAUN, the first attack framework to reconstruct unlearned data in FU systems. DRAUN targets optimization-based unlearning methods, which are widely adopted for their efficiency. We theoretically demonstrate why existing DRAs targeting machine unlearning in MLaaS fail in FU and show how DRAUN overcomes these limitations. We validate our approach through extensive experiments on four datasets and four model architectures, evaluating its performance against five popular unlearning methods, effectively demonstrating that state-of-the-art FU methods remain vulnerable to DRAs.
Problem

Research questions and friction points this paper is trying to address.

Reconstructing unlearned data in Federated Unlearning systems
Addressing privacy risks from malicious server exploitation
Overcoming limitations of existing Data Reconstruction Attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Algorithm-agnostic attack on federated unlearning
Exploits unlearning updates for data reconstruction
Targets optimization-based unlearning methods
Hithem Lamri
NYUAD
Machine Learning · Federated Learning · Differential Privacy
Manaar Alam
Post-Doctoral Associate, New York University Abu Dhabi
Deep Learning Security · System Security · Hardware Security
Haiyan Jiang
Center for Cyber Security, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
Michail Maniatakos
Center for Cyber Security, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates