🤖 AI Summary
To address the need for efficient erasure of sensitive concepts (e.g., copyrighted or privacy-sensitive content) from large-scale text-to-image diffusion models, this paper proposes SPEED, a fine-tuning-free, quality-preserving concept erasure framework based on model editing. Methodologically, it introduces three complementary components, Influence-based Prior Filtering (IPF), Directed Prior Augmentation (DPA), and Invariant Equality Constraints (IEC), implemented as constrained optimization within the model's parameter null space. This design removes target concepts while preserving generation fidelity and diversity for non-target concepts. Experiments across multiple erasure tasks show that SPEED consistently outperforms existing methods in prior preservation, erasing up to 100 concepts in parallel within about five seconds while leaving non-target image quality largely intact. The method thus strikes a strong balance among efficiency, scalability, and practical applicability.
📝 Abstract
Erasing concepts from large-scale text-to-image (T2I) diffusion models has become increasingly crucial due to growing concerns over copyright infringement, offensive content, and privacy violations. However, existing methods either require costly fine-tuning or degrade image quality for non-target concepts (i.e., the prior) due to inherent optimization limitations. In this paper, we introduce SPEED, a model editing-based concept erasure approach that leverages null-space constraints for scalable, precise, and efficient erasure. Specifically, SPEED incorporates Influence-based Prior Filtering (IPF) to retain the most affected non-target concepts during erasure, Directed Prior Augmentation (DPA) to expand prior coverage while maintaining semantic consistency, and Invariant Equality Constraints (IEC) to regularize model editing by explicitly preserving key invariants during the T2I generation process. Extensive evaluations across multiple concept erasure tasks demonstrate that SPEED consistently outperforms existing methods in prior preservation while achieving efficient and high-fidelity concept erasure, successfully removing 100 concepts within just 5 seconds. Our code and models are available at: https://github.com/Ouxiang-Li/SPEED.
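To make the null-space idea concrete, here is a minimal NumPy sketch of null-space-constrained weight editing in general, not SPEED's actual update rule. All names and shapes (`K_prior` for text embeddings of concepts to preserve, `delta_W` for a raw erasure update) are illustrative assumptions: projecting the update onto the null space of the preserved keys guarantees those keys' outputs are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_prior = 16, 8, 5

# Hypothetical linear layer W; rows of K_prior are embeddings of
# non-target (prior) concepts whose outputs K_prior @ W must not change.
W = rng.standard_normal((d_in, d_out))
K_prior = rng.standard_normal((n_prior, d_in))
delta_W = rng.standard_normal((d_in, d_out))  # unconstrained erasure update

# Projector onto the null space of K_prior: K_prior @ P == 0,
# since K_prior @ (I - pinv(K_prior) @ K_prior) = K_prior - K_prior = 0.
P = np.eye(d_in) - np.linalg.pinv(K_prior) @ K_prior

# Constrained edit: prior concepts see exactly the same outputs as before.
W_edited = W + P @ delta_W

assert np.allclose(K_prior @ W_edited, K_prior @ W, atol=1e-8)
```

Because the projection is a single closed-form linear-algebra step rather than a gradient-based fine-tuning loop, edits of this kind can be applied to many concepts at once in seconds, which is the efficiency regime the abstract describes.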