Unlearning through Knowledge Overwriting: Reversible Federated Unlearning via Selective Sparse Adapter

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, existing unlearning methods struggle to simultaneously achieve selectivity, reversibility, and low computational overhead—often resulting in erroneous cross-client knowledge deletion, irreversible operations, and prohibitive training costs. To address this, we propose a novel framework for privacy-sensitive, controllable knowledge unlearning. Our approach first identifies target knowledge via layer-wise sensitivity analysis; then introduces reversible sparse adapters to enable fine-grained, parameter-level control over forgetting; and finally replaces full-model retraining with a knowledge-overwriting mechanism. Crucially, the original model parameters remain fully intact throughout the process, enabling on-demand, layer-wise, and fully reversible unlearning. Empirical evaluation across three benchmark datasets demonstrates that our method matches the unlearning efficacy of complete retraining while reducing unlearning computation by 92% on average—significantly outperforming all baseline approaches.

📝 Abstract
Federated Learning is a promising paradigm for privacy-preserving collaborative model training. In practice, it is essential not only to continuously train the model to acquire new knowledge but also to guarantee the right for old knowledge to be forgotten (i.e., federated unlearning), especially for privacy-sensitive information or harmful knowledge. However, current federated unlearning methods face several challenges, including indiscriminate unlearning of cross-client knowledge, irreversibility of unlearning, and significant unlearning costs. To this end, we propose a method named FUSED, which first identifies critical layers by analyzing each layer's sensitivity to knowledge and constructs sparse unlearning adapters for the sensitive ones. The adapters are then trained without altering the original parameters, overwriting the knowledge targeted for unlearning with the remaining knowledge. This knowledge-overwriting process enables FUSED to mitigate the effects of indiscriminate unlearning. Moreover, the introduction of independent adapters makes unlearning reversible and significantly reduces unlearning costs. Finally, extensive experiments on three datasets across various unlearning scenarios demonstrate that FUSED's effectiveness is comparable to Retraining, surpassing all other baselines while greatly reducing unlearning costs.
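The mechanism in the abstract — frozen base parameters, a sparse adapter on a sensitive layer, training the adapter on the retained data so it overwrites the forgotten knowledge, and exact recovery by detaching the adapter — can be sketched as follows. This is a minimal illustrative toy in numpy, not the authors' implementation; all names (`forward`, `mask`, `delta`, the toy data) are hypothetical, and the sensitivity analysis and federated aggregation steps are omitted.

```python
import numpy as np

# Hypothetical sketch of FUSED-style reversible unlearning (illustrative names,
# not the authors' code). The base layer's weights W stay frozen; a sparse
# adapter delta (masked to a subset of entries) is trained on the *retained*
# data so the combined layer W + mask*delta overwrites the target knowledge.
# Detaching the adapter restores the original model exactly (reversibility).

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # frozen base-layer weights (never updated)
mask = rng.random(W.shape) < 0.25    # sparse adapter: ~25% of entries trainable
delta = np.zeros_like(W)             # adapter parameters, initialised at zero

def forward(x, unlearned=True):
    """Layer output with the adapter attached (unlearned) or detached."""
    Weff = W + mask * delta if unlearned else W
    return x @ Weff.T

# Toy "retained" data: train only the adapter so the layer fits the remaining
# knowledge, i.e. the retained mapping overwrites the forgotten one.
x_keep = rng.normal(size=(32, 4))
y_keep = x_keep @ rng.normal(size=(4, 4)).T   # stand-in retained targets
lr = 0.05
for _ in range(500):
    err = forward(x_keep) - y_keep            # residual of 0.5*||err||^2
    grad = err.T @ x_keep / len(x_keep)       # gradient w.r.t. the weights
    delta -= lr * (mask * grad)               # update only the sparse entries

# The base parameters are untouched: detaching the adapter is exact recovery.
assert np.allclose(forward(x_keep, unlearned=False), x_keep @ W.T)
```

The key property the sketch demonstrates is that `W` is never modified, so "unlearning" is toggled by attaching or detaching the adapter, and only the masked fraction of parameters is ever trained — the source of the reduced unlearning cost.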
Problem

Research questions and friction points this paper is trying to address.

Addresses indiscriminate unlearning in federated learning
Ensures reversibility and reduces unlearning costs
Overwrites sensitive knowledge without altering original parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective sparse adapters for sensitive layers
Reversible unlearning via independent adapters
Knowledge overwriting reduces unlearning costs
Zhengyi Zhong
National University of Defense Technology
federated learning · domain adaptation · continual learning · machine unlearning
Weidong Bao
Laboratory for Big Data and Decision, National University of Defense Technology, China
Ji Wang
Laboratory for Big Data and Decision, National University of Defense Technology, China
Shuai Zhang
Laboratory for Big Data and Decision, National University of Defense Technology, China
Jingxuan Zhou
Laboratory for Big Data and Decision, National University of Defense Technology, China
Lingjuan Lyu
Sony
Foundation Models · Federated Learning · Responsible AI
Wei Yang Bryan Lim
Assistant Professor, Nanyang Technological University (NTU), Singapore
Edge Intelligence · Federated Learning · Applied AI · Sustainable AI