Jellyfish: Zero-Shot Federated Unlearning Scheme with Knowledge Disentanglement

📅 2026-04-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the critical challenge in federated learning of effectively erasing a model’s memory of a user’s data upon request while preserving model utility. The authors propose a zero-shot federated unlearning method that achieves efficient forgetting and model restoration without accessing the original user data. Their approach innovatively integrates knowledge disentanglement, synthetic proxy data generation, and a multi-objective composite loss function comprising hard, confusion, and distillation losses, further enhanced by gradient harmonization and masking mechanisms. Experimental results demonstrate that the proposed framework consistently ensures strong privacy guarantees across diverse settings and successfully recovers model accuracy to near-original levels.
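The "synthetic proxy data generation" step mentioned above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's implementation: a toy scalar model f(x) = w·x stands in for the trained network, and "error-minimization noise" is an input optimized so the model's loss on it is already near zero, letting it act as a proxy for the forgotten data.

```python
# Hedged sketch: error-minimizing noise as proxy data.
# Assumption: toy scalar model f(x) = w * x with squared error against a
# target label y; the real scheme optimizes image-shaped noise against a
# trained federated model.

def generate_proxy(w, y, steps=200, lr=0.1):
    """Gradient-descend on the INPUT x (not the weights) until (w*x - y)^2 ~ 0."""
    x = 0.0  # start from zero "noise"
    for _ in range(steps):
        grad = 2.0 * (w * x - y) * w  # d/dx of the squared error
        x -= lr * grad
    return x

# With w = 3 and target y = 6, the proxy converges near x = 2, where the
# model's error is ~0, so the noise "looks like" data the model has learned.
x_proxy = generate_proxy(3.0, 6.0)
```

The key design point is that only the trained model is needed to synthesize the proxy, which is what makes the scheme zero-shot with respect to the user's original data.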
πŸ“ Abstract
With the increasing importance of data privacy and security, federated unlearning emerges as a new research field dedicated to ensuring that once specific data is deleted, federated learning models no longer retain or disclose related information. In this paper, we propose a zero-shot federated unlearning scheme, named Jellyfish. It distinguishes itself from conventional federated unlearning frameworks in four key aspects: synthetic data generation, knowledge disentanglement, loss function design, and model repair. To preserve the privacy of forgotten data, we design a zero-shot unlearning mechanism that generates error-minimization noise as proxy data for the data to be forgotten. To maintain model utility, we first propose a knowledge disentanglement mechanism that regularises the output of the final convolutional layer by restricting the number of activated channels for the data to be forgotten and encouraging activation sparsity. Next, we construct a comprehensive loss function that incorporates multiple components, including hard loss, confusion loss, distillation loss, model weight drift loss, gradient harmonization, and gradient masking, to effectively align the learning trajectories of the objectives of "forgetting" and "retaining". Finally, we propose a zero-shot repair mechanism that leverages proxy data to restore model accuracy within acceptable bounds without accessing users' local data. To evaluate the performance of the proposed zero-shot federated unlearning scheme, we conducted comprehensive experiments across diverse settings. The results validate the effectiveness and robustness of the scheme.
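The abstract's "gradient harmonization" between the forgetting and retaining objectives can be sketched as a conflict-aware gradient combination. The projection rule below is an assumption (a PCGrad-style projection is one common way to harmonize conflicting gradients); the paper's exact mechanism may differ.

```python
# Hedged sketch of gradient harmonization (assumed form: PCGrad-style
# projection). When the "forget" gradient and the "retain" gradient conflict
# (negative dot product), project the forget gradient onto the normal plane
# of the retain gradient so the combined update does not undo retention.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def harmonize(g_forget, g_retain):
    """Combine the two objective gradients into one update direction."""
    d = dot(g_forget, g_retain)
    if d < 0:  # objectives conflict: strip the conflicting component
        scale = d / dot(g_retain, g_retain)
        g_forget = [gf - scale * gr for gf, gr in zip(g_forget, g_retain)]
    return [gf + gr for gf, gr in zip(g_forget, g_retain)]
```

For example, with a forget gradient [1, 0] and a retain gradient [-1, 1], the raw sum [0, 1] would partly cancel both objectives, whereas the harmonized update keeps a non-negative component along the retain direction.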
Problem

Research questions and friction points this paper is trying to address.

federated unlearning
data privacy
zero-shot learning
knowledge disentanglement
model forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

zero-shot federated unlearning
knowledge disentanglement
synthetic proxy data
multi-component loss function
model repair
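Among the innovations above, the knowledge disentanglement mechanism (restricting the number of activated channels for the to-be-forgotten data and encouraging activation sparsity) can be sketched as follows. The top-k mask plus L1 penalty below is an assumed concrete form, not the paper's exact regularizer.

```python
# Hedged sketch of knowledge disentanglement (assumed form: keep only the
# k strongest channels of the final conv layer's per-channel activations
# for the forgotten data, and penalize the L1 norm of what remains).

def disentangle(channel_acts, k):
    """Return (masked activations, L1 sparsity penalty on the kept channels)."""
    order = sorted(range(len(channel_acts)),
                   key=lambda i: abs(channel_acts[i]), reverse=True)
    keep = set(order[:k])  # restrict the number of activated channels
    masked = [a if i in keep else 0.0 for i, a in enumerate(channel_acts)]
    penalty = sum(abs(a) for a in masked)  # encourage activation sparsity
    return masked, penalty
```

The intent is that the forgotten class's knowledge is confined to few channels, so erasing it disturbs the channels serving retained classes as little as possible.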
Houzhe Wang
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
Xiaojie Zhu
Staff Research Scientist
Data Privacy, Applied Cryptography, Cybersecurity, Distributed System
Chi Chen
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China