Why Does Instruction-Based Unlearning Fail in Diffusion Models?

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates why relying solely on natural language instructions is insufficient for effectively unlearning target concepts in diffusion models. Through controlled experiments that integrate CLIP text encoder analysis with dynamic tracking of cross-attention mechanisms during the denoising process, the work reveals a fundamental limitation of instruction-level unlearning: representations of the target concept persist throughout the generation pipeline, and textual prompts fail to adequately suppress their activation during inference. These findings demonstrate that achieving effective concept erasure requires moving beyond mere language-based control and necessitates more profound intervention strategies that directly modify internal model representations or dynamics.
📝 Abstract
Instruction-based unlearning has proven effective for modifying the behavior of large language models at inference time, but whether this paradigm extends to other generative models remains unclear. In this work, we investigate instruction-based unlearning in diffusion-based image generation models and show, through controlled experiments across multiple concepts and prompt variants, that diffusion models systematically fail to suppress targeted concepts when guided solely by natural-language unlearning instructions. By analyzing both the CLIP text encoder and cross-attention dynamics during the denoising process, we find that unlearning instructions do not induce sustained reductions in attention to the targeted concept tokens, causing the targeted concept representations to persist throughout generation. These results reveal a fundamental limitation of prompt-level instruction in diffusion models and suggest that effective unlearning requires interventions beyond inference-time language control.
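The abstract's central diagnostic is whether an unlearning instruction reduces the cross-attention probability mass that falls on the targeted concept tokens during denoising. A minimal stdlib-only sketch of that metric is below; the token positions, logit values, and step count are all hypothetical toy data, not values from the paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of attention logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def concept_attention_mass(scores, concept_positions):
    """Fraction of one query's cross-attention probability assigned
    to the target-concept token positions."""
    probs = softmax(scores)
    return sum(probs[i] for i in concept_positions)

# Toy cross-attention logits for a prompt like
# "a photo of a dog, do not draw a dog" (hypothetical tokenization):
# positions 4 and 9 stand in for the two "dog" tokens.
scores_per_step = [
    [0.1, 0.2, 0.1, 0.3, 2.5, 0.2, 0.1, 0.1, 0.2, 2.4],  # early denoising step
    [0.1, 0.2, 0.1, 0.3, 2.3, 0.2, 0.1, 0.1, 0.2, 2.2],  # late denoising step
]
for t, scores in enumerate(scores_per_step):
    mass = concept_attention_mass(scores, [4, 9])
    print(f"step {t}: concept attention mass = {mass:.2f}")
```

If the instruction worked, this mass would decay across steps; the paper's finding is that no sustained reduction occurs, so the concept representation keeps being attended to throughout generation.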
Problem

Research questions and friction points this paper is trying to address.

- instruction-based unlearning
- diffusion models
- concept suppression
- text-to-image generation
- unlearning failure
Innovation

Methods, ideas, or system contributions that make the work stand out.

- instruction-based unlearning
- diffusion models
- cross-attention dynamics
- concept suppression
- CLIP text encoder