Distill, Forget, Repeat: A Framework for Continual Unlearning in Text-to-Image Diffusion Models

📅 2025-12-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the stability degradation in text-to-image diffusion models under sequential data deletion requests, proposing the first generative distillation framework for continual unlearning. The method integrates continual learning principles into machine unlearning by employing multi-objective teacher-student distillation—simultaneously optimizing generation quality, semantic fidelity, and adversarial robustness—to enable progressive removal of target concepts without full model retraining. Innovatively, it introduces feature preservation constraints and adversarial evaluation to mitigate cumulative forgetting damage, thereby overcoming the catastrophic failure of single-step unlearning in sequential deletion scenarios. Evaluated on a 10-step continual unlearning benchmark, the approach significantly outperforms existing baselines: it improves target-concept forgetting rate by 32.7%, while preserving 94.1% of non-target concept generation quality and overall image fidelity.

Technology Category

Application Category

📝 Abstract
The recent rapid growth of visual generative models trained on vast web-scale datasets has created significant tension with data privacy regulations and copyright laws, such as GDPR's ``Right to be Forgotten.''This necessitates machine unlearning (MU) to remove specific concepts without the prohibitive cost of retraining. However, existing MU techniques are fundamentally ill-equipped for real-world scenarios where deletion requests arrive sequentially, a setting known as continual unlearning (CUL). Naively applying one-shot methods in a continual setting triggers a stability crisis, leading to a cascade of degradation characterized by retention collapse, compounding collateral damage to related concepts, and a sharp decline in generative quality. To address this critical challenge, we introduce a novel generative distillation based continual unlearning framework that ensures targeted and stable unlearning under sequences of deletion requests. By reframing each unlearning step as a multi-objective, teacher-student distillation process, the framework leverages principles from continual learning to maintain model integrity. Experiments on a 10-step sequential benchmark demonstrate that our method unlearns forget concepts with better fidelity and achieves this without significant interference to the performance on retain concepts or the overall image quality, substantially outperforming baselines. This framework provides a viable pathway for the responsible deployment and maintenance of large-scale generative models, enabling industries to comply with ongoing data removal requests in a practical and effective manner.
Problem

Research questions and friction points this paper is trying to address.

Enables sequential removal of specific concepts from diffusion models
Prevents model degradation during continual unlearning processes
Maintains generative quality while complying with data privacy regulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative distillation for stable continual unlearning
Multi-objective teacher-student framework for sequential deletions
Preserves model integrity without retraining or quality loss
🔎 Similar Papers
No similar papers found.