Data Unlearning in Diffusion Models

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models memorize and reproduce training data, posing significant copyright and privacy compliance risks, yet retraining from scratch to remove datapoints is prohibitively expensive, motivating efficient data-unlearning techniques. Existing approaches either rely on anchor prompts, making them unsuitable for instance-level forgetting, or prove unstable or ineffective. This paper introduces Subtracted Importance Sampled Scores (SISS), the first family of loss functions for anchor-free, instance-level data unlearning in diffusion models with theoretical guarantees. SISS combines importance sampling with a weighted multi-objective formulation that balances preserving model quality against unlearning the targeted datapoints. Evaluated on CelebA-HQ and MNIST, SISS achieves Pareto-optimal trade-offs between generation quality and unlearning strength; on Stable Diffusion, it mitigates memorization on nearly 90% of tested prompts.

📝 Abstract
Recent work has shown that diffusion models memorize and reproduce training data examples. At the same time, large copyright lawsuits and legislation such as GDPR have highlighted the need for erasing datapoints from diffusion models. However, retraining from scratch is often too expensive. This motivates the setting of data unlearning, i.e., the study of efficient techniques for unlearning specific datapoints from the training set. Existing concept unlearning techniques require an anchor prompt/class/distribution to guide unlearning, which is not available in the data unlearning setting. General-purpose machine unlearning techniques were found to be either unstable or failed to unlearn data. We therefore propose a family of new loss functions called Subtracted Importance Sampled Scores (SISS) that utilize importance sampling and are the first method to unlearn data with theoretical guarantees. SISS is constructed as a weighted combination between simpler objectives that are responsible for preserving model quality and unlearning the targeted datapoints. When evaluated on CelebA-HQ and MNIST, SISS achieved Pareto optimality along the quality and unlearning strength dimensions. On Stable Diffusion, SISS successfully mitigated memorization on nearly 90% of the prompts we tested.
Problem

Research questions and friction points this paper is trying to address.

Efficiently unlearn specific training datapoints from diffusion models without retraining from scratch
Mitigate memorization and the resulting copyright and privacy risks in diffusion models
Unlearn data with theoretical guarantees when no anchor prompt, class, or distribution is available
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Subtracted Importance Sampled Scores (SISS), a new family of loss functions for data unlearning
Uses importance sampling to unlearn data with theoretical guarantees
Achieves Pareto optimality along the quality and unlearning-strength dimensions
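The abstract describes SISS as a weighted combination of simpler objectives: one that preserves model quality on the retained data and one that unlearns the targeted datapoints. A minimal illustrative sketch of such a subtracted objective is below; it is not the authors' exact formulation. The linear `denoiser`, the mixing weight `lam`, and the fixed noise level `alpha_bar` are hypothetical stand-ins, and the importance-sampling component of SISS is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x, W):
    # Toy linear noise-predictor standing in for the diffusion U-Net
    # (hypothetical simplification).
    return x @ W

def subtracted_unlearning_loss(W, x_keep, x_forget, alpha_bar, lam=0.1):
    """Sketch of a subtracted, weighted unlearning objective:
    keep-set denoising loss minus a weighted forget-set term.
    The actual SISS loss additionally applies importance sampling
    over a mixture of the keep/forget noising distributions."""
    eps_k = rng.standard_normal(x_keep.shape)
    eps_f = rng.standard_normal(x_forget.shape)
    # DDPM forward process: x_t = sqrt(abar)*x_0 + sqrt(1-abar)*eps
    xt_k = np.sqrt(alpha_bar) * x_keep + np.sqrt(1 - alpha_bar) * eps_k
    xt_f = np.sqrt(alpha_bar) * x_forget + np.sqrt(1 - alpha_bar) * eps_f
    loss_keep = np.mean((denoiser(xt_k, W) - eps_k) ** 2)    # preserve quality
    loss_forget = np.mean((denoiser(xt_f, W) - eps_f) ** 2)  # unlearn target
    # Subtracting the forget term means gradient descent on this loss
    # ascends the denoising error on the targeted datapoint.
    return loss_keep - lam * loss_forget

W = rng.standard_normal((8, 8)) * 0.1
x_keep = rng.standard_normal((16, 8))    # retained training data
x_forget = rng.standard_normal((1, 8))   # datapoint to unlearn
loss = subtracted_unlearning_loss(W, x_keep, x_forget, alpha_bar=0.7)
```

The weight `lam` plays the role of the trade-off knob the paper evaluates: larger values push harder toward forgetting at the expense of generation quality, which is the Pareto frontier the authors report on CelebA-HQ and MNIST.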