Forget Many, Forget Right: Scalable and Precise Concept Unlearning in Diffusion Models

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the scalability bottlenecks of large-scale multi-concept unlearning in diffusion models: conflicting weight updates, inadvertent removal of semantically similar content, and reliance on auxiliary data. The authors propose ScaPre, a framework that unlearns efficiently and precisely without additional data by combining conflict-aware parameter decoupling with a stability-preserving optimization mechanism. Key innovations include spectral trace regularization and geometric alignment to stabilize training, along with an adaptive decoupler guided by the InfoMax principle that strictly confines unlearning to the target subspace. Experiments show that ScaPre can simultaneously forget up to five times more concepts (spanning objects, artistic styles, and sensitive content) than current state-of-the-art methods while preserving high generation quality, achieving superior accuracy and scalability.

📝 Abstract
Text-to-image diffusion models have achieved remarkable progress, yet their use raises copyright and misuse concerns, prompting research into machine unlearning. However, extending multi-concept unlearning to large-scale scenarios remains difficult due to three challenges: (i) conflicting weight updates that hinder unlearning or degrade generation; (ii) imprecise mechanisms that cause collateral damage to similar content; and (iii) reliance on additional data or modules, creating scalability bottlenecks. To address these, we propose Scalable-Precise Concept Unlearning (ScaPre), a unified framework tailored for large-scale unlearning. ScaPre introduces a conflict-aware stable design, integrating spectral trace regularization and geometry alignment to stabilize optimization, suppress conflicts, and preserve global structure. Furthermore, an Informax Decoupler identifies concept-relevant parameters and adaptively reweights updates, strictly confining unlearning to the target subspace. ScaPre yields an efficient closed-form solution without requiring auxiliary data or sub-models. Comprehensive experiments on objects, styles, and explicit content demonstrate that ScaPre effectively removes target concepts while maintaining generation quality. It forgets up to $\times \mathbf{5}$ more concepts than the best baseline within acceptable quality limits, achieving state-of-the-art precision and efficiency for large-scale unlearning.
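The abstract mentions that ScaPre admits an efficient closed-form solution, but the paper's exact update is not reproduced here. As a rough illustration of the general family of closed-form concept-editing updates used in prior diffusion-model unlearning work (a ridge-regularized least-squares edit of a projection matrix; the function name, variable names, and the zero-target choice below are all illustrative assumptions, not ScaPre's method):

```python
import numpy as np

def closed_form_edit(W, forget_keys, forget_targets, preserve_keys, lam=0.1):
    """Ridge-regularized closed-form weight edit (illustrative sketch only,
    NOT the ScaPre update).

    Minimizes  sum_f ||W' k_f - v_f||^2 + sum_p ||(W' - W) k_p||^2
               + lam * ||W' - W||_F^2   over W'.

    W:               (d_out, d_in) projection matrix to edit
    forget_keys:     (n_f, d_in) embeddings of concepts to remove
    forget_targets:  (n_f, d_out) desired outputs for those keys
                     (e.g. the outputs of a neutral anchor concept)
    preserve_keys:   (n_p, d_in) embeddings whose outputs must not change
    lam:             ridge term keeping the edit close to the original W
    """
    d_in = W.shape[1]
    # Accumulate the normal equations  W' A = B  of the least-squares edit.
    A = lam * np.eye(d_in)          # regularizer: stay near the original W
    B = lam * W
    for k, v in zip(forget_keys, forget_targets):
        A += np.outer(k, k)         # forget keys are remapped ...
        B += np.outer(v, k)         # ... to their replacement targets
    for k in preserve_keys:
        A += np.outer(k, k)         # preserved keys keep their
        B += np.outer(W @ k, k)     # original outputs W k
    return B @ np.linalg.inv(A)
```

With no forget keys the update reduces exactly to `W` (since `B = W A`), which is the sense in which such edits leave non-target behavior untouched; with forget keys present, their outputs are pulled toward the chosen targets while preserved keys stay approximately fixed.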
Problem

Research questions and friction points this paper is trying to address.

concept unlearning
diffusion models
scalability
precision
machine unlearning
Innovation

Methods, ideas, or system contributions that make the work stand out.

concept unlearning
diffusion models
scalable unlearning
conflict-aware optimization
Informax Decoupler