🤖 AI Summary
This work addresses the inefficiencies of existing machine unlearning methods for diffusion models, which often suffer from prolonged training times, high computational costs, and unstable convergence caused by poorly directed gradient updates. To overcome these limitations, the authors propose an efficient unlearning approach grounded in a knowledge distillation framework. The method uses a saliency mask to identify the parameters most critical to the targeted forgetting task and a gradient-guided mechanism to restrict updates to those parameters. Evaluated on the CIFAR-10 and STL-10 datasets across class- and concept-level unlearning tasks, the proposed method significantly reduces training time while generating samples that better align with the true data distribution. It matches or exceeds the unlearning performance of current state-of-the-art methods, effectively balancing unlearning efficacy and computational efficiency.
📝 Abstract
Machine unlearning (MU) has become a critical technique for the safe and compliant operation of generative AI (GenAI) models. While existing MU methods are effective, most impose prohibitive training time and computational overhead. Our analysis suggests the root cause lies in poorly directed gradient updates, which reduce training efficiency and destabilize convergence. To mitigate these issues, we propose PECKER, an efficient MU approach that matches or outperforms prevailing methods. Within a distillation framework, PECKER introduces a saliency mask to prioritize updates to the parameters that contribute most to forgetting the targeted data, thereby reducing unnecessary gradient computation and shortening overall training time without sacrificing unlearning efficacy. Our method forgets the targeted class or concept more quickly while generating samples that closely align with the true image distribution on the CIFAR-10 and STL-10 datasets, achieving shorter training times for both class and concept forgetting.
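The abstract does not give implementation details, but the core idea of a saliency mask gating gradient updates can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the function names (`saliency_mask`, `masked_update`), the `keep_ratio` hyperparameter, and the use of gradient magnitude as the saliency criterion are not specified by the paper.

```python
import numpy as np

def saliency_mask(forget_grads, keep_ratio=0.1):
    """Hypothetical saliency criterion: keep the top keep_ratio fraction of
    parameters ranked by the magnitude of the forgetting-loss gradient."""
    flat = np.abs(forget_grads).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]  # k-th largest magnitude
    return (np.abs(forget_grads) >= threshold).astype(forget_grads.dtype)

def masked_update(params, forget_grads, lr=0.01, keep_ratio=0.1):
    """Apply a gradient step only to salient parameters; the rest of the
    model is left untouched, which is where the claimed compute savings
    would come from (masked entries need no update)."""
    mask = saliency_mask(forget_grads, keep_ratio)
    return params - lr * mask * forget_grads
```

In a real diffusion-model setting the gradients would come from a distillation-style forgetting loss (e.g. pushing the student's noise predictions on the forget set away from the teacher's), and the mask would typically be computed once per layer rather than globally; this sketch only shows the masking mechanics.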