PECKER: A Precisely Efficient Critical Knowledge Erasure Recipe For Machine Unlearning in Diffusion Models

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiencies of existing machine unlearning methods in diffusion models, which often suffer from prolonged training times, high computational costs, and unstable convergence due to suboptimal gradient update directions. To overcome these limitations, the authors propose an efficient unlearning approach grounded in a knowledge distillation framework. The method leverages a saliency mask to precisely identify parameters most critical to the target forgetting task and integrates a gradient-guided mechanism to enable targeted and efficient parameter updates. Evaluated on CIFAR-10 and STL-10 datasets across class- and concept-level unlearning tasks, the proposed method significantly reduces training time while generating samples that better align with the true data distribution. It achieves unlearning performance on par with or superior to current state-of-the-art methods, effectively balancing unlearning efficacy and computational efficiency.
📝 Abstract
Machine unlearning (MU) has become a critical technique for the safe and compliant operation of GenAI models. While existing MU methods are effective, most impose prohibitive training time and computational overhead. Our analysis suggests the root cause lies in poorly directed gradient updates, which reduce training efficiency and destabilize convergence. To mitigate these issues, we propose PECKER, an efficient MU approach that matches or outperforms prevailing methods. Within a distillation framework, PECKER introduces a saliency mask to prioritize updates to the parameters that contribute most to forgetting the targeted data, thereby reducing unnecessary gradient computation and shortening overall training time without sacrificing unlearning efficacy. Our method unlearns the targeted class or concept more quickly while generating samples that closely align with the true image distribution, achieving shorter training times for both class forgetting and concept forgetting on CIFAR-10 and STL-10.
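The abstract describes a saliency mask that restricts gradient updates to the parameters most responsible for the targeted knowledge. The sketch below shows one plausible way to build and apply such a mask in PyTorch; the function names, the `top_k` fraction, and the global gradient-magnitude threshold are illustrative assumptions, not the paper's actual procedure.

```python
import torch

def compute_saliency_mask(model, forget_loss, top_k=0.1):
    """Hypothetical sketch: rank parameters by the magnitude of their
    gradient on the forgetting loss and keep only the top-k fraction."""
    forget_loss.backward(retain_graph=True)
    # Concatenate all gradient magnitudes to pick one global threshold.
    grads = torch.cat([p.grad.abs().flatten()
                       for p in model.parameters() if p.grad is not None])
    threshold = torch.quantile(grads, 1.0 - top_k)
    # Binary mask per parameter tensor: 1 where the gradient is salient.
    return {name: (p.grad.abs() >= threshold).float()
            for name, p in model.named_parameters() if p.grad is not None}

def masked_step(model, masks, lr=1e-4):
    """Apply a gradient step only to the salient parameter entries."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None and name in masks:
                p -= lr * masks[name] * p.grad
```

Because non-salient entries receive a zero mask, their values are untouched, which is one way the described approach could avoid unnecessary updates while concentrating the forgetting signal.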
Problem

Research questions and friction points this paper is trying to address.

machine unlearning
diffusion models
computational overhead
training efficiency
gradient updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

machine unlearning
diffusion models
saliency mask
knowledge erasure
model distillation
Zhiyong Ma
Cao Tu Li(Guangzhou) Technology Co., Ltd, China; South China University of Technology, China
Zhitao Deng
Cao Tu Li(Guangzhou) Technology Co., Ltd, China; Guangzhou Xinhua University, China
Huan Tang
The Wharton School, University of Pennsylvania
Jialin Chen
Yale University
Zhijun Zheng
Guangzhou Xinhua University, China
Zhengping Li
Hong Kong Baptist University, Hong Kong
Qingyuan Chuai
Cao Tu Li(Guangzhou) Technology Co., Ltd, China