Unstable Unlearning: The Hidden Risk of Concept Resurgence in Diffusion Models

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a novel security vulnerability in text-to-image diffusion models, termed "concept resurgence": fine-tuning a model that has undergone concept unlearning (e.g., of copyrighted or sensitive content) on seemingly unrelated images can unexpectedly reinstate the forgotten concepts, even under benign, non-adversarial conditions. The authors formally define this phenomenon and establish a systematic experimental framework based on Stable Diffusion v1.4 and v2.1 to quantitatively evaluate how stable mainstream unlearning methods, including SFT and MEND, remain under subsequent fine-tuning. Results demonstrate that all evaluated approaches fail significantly, with up to 73% concept resurgence, exposing a fundamental flaw in the security and behavioral consistency of current incremental update paradigms and challenging the implicit assumption that unlearning is permanent and irreversible. The study provides critical empirical evidence and actionable insights for developing trustworthy, auditable diffusion model updates.

📝 Abstract
Text-to-image diffusion models rely on massive, web-scale datasets. Training them from scratch is computationally expensive, and as a result, developers often prefer to make incremental updates to existing models. These updates often compose fine-tuning steps (to learn new concepts or improve model performance) with "unlearning" steps (to "forget" existing concepts, such as copyrighted works or explicit content). In this work, we demonstrate a critical and previously unknown vulnerability that arises in this paradigm: even under benign, non-adversarial conditions, fine-tuning a text-to-image diffusion model on seemingly unrelated images can cause it to "relearn" concepts that were previously "unlearned." We comprehensively investigate the causes and scope of this phenomenon, which we term concept resurgence, by performing a series of experiments which compose "concept unlearning" with subsequent fine-tuning of Stable Diffusion v1.4 and Stable Diffusion v2.1. Our findings underscore the fragility of composing incremental model updates, and raise serious new concerns about current approaches to ensuring the safety and alignment of text-to-image diffusion models.
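The evaluation the abstract describes reduces to a simple before/after measurement: among prompts whose target concept was successfully forgotten after unlearning, how many produce the concept again after the unrelated fine-tuning step? Below is a minimal, hedged sketch of such a resurgence metric. The function name and the boolean concept-detector outputs are illustrative assumptions, not the paper's actual code; in practice each flag would come from running a concept classifier (or human rater) over images generated for a fixed prompt set.

```python
def resurgence_rate(post_unlearn_flags, post_finetune_flags):
    """Fraction of successfully unlearned prompts whose concept reappears.

    post_unlearn_flags[i]  -- True if the concept is still detected for
                              prompt i after unlearning (unlearning failed).
    post_finetune_flags[i] -- True if the concept is detected for prompt i
                              after the subsequent, unrelated fine-tuning.
    """
    if len(post_unlearn_flags) != len(post_finetune_flags):
        raise ValueError("flag lists must cover the same prompt set")

    # Only prompts where unlearning actually succeeded can "resurge".
    forgotten = [i for i, present in enumerate(post_unlearn_flags) if not present]
    if not forgotten:
        return 0.0

    resurged = sum(1 for i in forgotten if post_finetune_flags[i])
    return resurged / len(forgotten)
```

For example, if four prompts were tested, the concept was forgotten for three of them, and two of those three regenerate the concept after fine-tuning, the resurgence rate is 2/3. A headline figure like "up to 73% concept resurgence" would correspond to this ratio computed over the worst-case unlearning method and fine-tuning dataset.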
Problem

Research questions and friction points this paper is trying to address.

Concept resurgence in diffusion models
Unstable unlearning during fine-tuning
Safety concerns in model updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept unlearning in models
Fine-tuning triggers relearning
Stable Diffusion vulnerability analysis