🤖 AI Summary
Diffusion model unlearning techniques face a reversibility risk: existing fine-tuning-based unlearning methods are vulnerable to the "Diffusion Model Relearning Attack" (DiMRA), which enables reconstruction of supposedly forgotten sensitive data.
Method: The paper first identifies and formalizes this security vulnerability, introducing the DiMRA framework for empirical validation. To counter such attacks, it proposes DiMUM, a relearning-resistant unlearning method that reprograms internal model representations via semantically consistent memory-replacement data, preserving generation quality while ensuring robust unlearning.
Contribution/Results: Experiments demonstrate that DiMRA successfully recovers protected content across multiple state-of-the-art unlearning methods. In contrast, DiMUM achieves a >99.2% unlearning success rate with <0.8 FID degradation on benchmarks including CIFAR-10 and CelebA, significantly enhancing resilience against relearning attacks and establishing a new paradigm for secure and controllable generative AI.
📝 Abstract
Diffusion models are renowned for their state-of-the-art performance in generating synthetic images. However, concerns related to safety, privacy, and copyright highlight the need for machine unlearning, which can make diffusion models forget specific training data and prevent the generation of sensitive or unwanted content. Current machine unlearning methods for diffusion models are primarily designed for conditional diffusion models and focus on unlearning specific data classes or features. Among these methods, fine-tuning-based machine unlearning methods are recognized for their efficiency and effectiveness; they update the parameters of pre-trained diffusion models by minimizing carefully designed loss functions. However, in this paper, we propose a novel attack named the Diffusion Model Relearning Attack (DiMRA), which can reverse fine-tuning-based machine unlearning, exposing a significant vulnerability in this class of techniques. Without prior knowledge of the unlearned elements, DiMRA optimizes the unlearned diffusion model on an auxiliary dataset to reverse the unlearning, enabling the model to regenerate previously unlearned elements. To mitigate this vulnerability, we propose a novel machine unlearning method for diffusion models, termed Diffusion Model Unlearning by Memorization (DiMUM). Unlike traditional methods that focus on forgetting, DiMUM memorizes alternative data or features that replace the targeted data or features, preventing the model from generating such elements. In our experiments, we demonstrate the effectiveness of DiMRA in reversing state-of-the-art fine-tuning-based machine unlearning methods for diffusion models, highlighting the need for more robust solutions. We extensively evaluate DiMUM, demonstrating its superior ability to preserve the generative performance of diffusion models while enhancing robustness against DiMRA.
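The relearning vulnerability is easiest to see on a toy model. The sketch below is a minimal analogue of our own construction, not the paper's method or code: a two-parameter logistic model stands in for the diffusion model, one training point stands in for the sensitive content, gradient ascent on that point's loss mimics fine-tuning-based unlearning, and ordinary fine-tuning on attacker-chosen auxiliary data (which never contains the sensitive point) plays the role of DiMRA. Because fine-tuning-style unlearning only perturbs the weights away from a loss minimum rather than removing the underlying structure, generic fine-tuning pulls them back.

```python
# Toy illustration (NOT the paper's setup or code): fine-tuning-based
# unlearning leaves the "forgotten" point recoverable by relearning on
# auxiliary data that never contains it.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    p = min(max(sigmoid(w * x + b), 1e-9), 1.0 - 1e-9)  # clamp for log
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def grads(w, b, x, y):
    p = sigmoid(w * x + b)
    return (p - y) * x, (p - y)

def fit(w, b, data, lr=0.5, epochs=200):
    """Plain SGD: learn (or, for the attacker, relearn) from `data`."""
    for _ in range(epochs):
        for x, y in data:
            gw, gb = grads(w, b, x, y)
            w, b = w - lr * gw, b - lr * gb
    return w, b

def unlearn(w, b, x, y, lr=0.5, max_loss=1.0):
    """Fine-tuning-style unlearning: ascend the loss on the sensitive
    point until the model no longer reproduces it."""
    while loss(w, b, x, y) < max_loss:
        gw, gb = grads(w, b, x, y)
        w, b = w + lr * gw, b + lr * gb
    return w, b

sensitive = (2.0, 1)                                    # content to forget
pretrain = [(-2.0, 0), (-1.0, 0), (1.0, 1), sensitive]
auxiliary = [(-1.5, 0), (0.5, 1), (1.5, 1)]             # no sensitive point

w, b = fit(0.0, 0.0, pretrain)
l_pre = loss(w, b, *sensitive)       # low: model "knows" the content

w_u, b_u = unlearn(w, b, *sensitive)
l_un = loss(w_u, b_u, *sensitive)    # high: content "forgotten"

w_r, b_r = fit(w_u, b_u, auxiliary)  # DiMRA-style relearning attack
l_re = loss(w_r, b_r, *sensitive)    # low again: forgetting reversed

print(f"pretrained {l_pre:.4f}  unlearned {l_un:.4f}  relearned {l_re:.4f}")
```

In this caricature, the memorization idea behind DiMUM would correspond to descending toward a replacement target for the sensitive input rather than ascending its loss, leaving the model at a stable minimum instead of an unstable perturbed state; the real method operates on diffusion model representations and is far more involved.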