🤖 AI Summary
Existing methods struggle to reliably erase broad, ambiguous concepts—such as "sexual innuendo" or "violence"—from diffusion models, posing significant safety risks in generated content. This work proposes a concept prototype–guided mechanism grounded in the geometric structure of the embedding space: it identifies prototypical representations of target concepts via clustering and employs them as negative conditioning signals during inference to achieve precise erasure. By leveraging concept prototypes to steer controllable generation in diffusion models—a first in the field—this approach substantially improves erasure efficacy for broad concepts across multiple benchmarks while preserving overall image quality, thereby enhancing both safety and controllability in text-to-image synthesis.
📝 Abstract
Concept erasure is extensively utilized in image generation to prevent text-to-image models from generating undesired content. Existing methods can effectively erase narrow concepts that are specific and concrete, such as distinct intellectual properties (e.g., Pikachu) or recognizable characters (e.g., Elon Musk). However, their performance degrades on broad concepts such as "sexual" or "violent", whose wide scope and multi-faceted nature make them difficult to erase reliably. To overcome this limitation, we exploit the model's intrinsic embedding geometry to identify latent embeddings that encode a given concept. By clustering these embeddings, we derive a set of concept prototypes that summarize the model's internal representations of the concept, and employ them as negative conditioning signals during inference to achieve precise and reliable erasure. Extensive experiments across multiple benchmarks show that our approach achieves substantially more reliable removal of broad concepts while preserving overall image quality, marking a step towards safer and more controllable image generation.
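The abstract describes a two-step pipeline: cluster concept-related embeddings to obtain prototypes, then combine those prototypes as a negative conditioning term at inference time. The paper does not give implementation details, so the following is only a minimal NumPy sketch of that general idea — the k-means routine, the `concept_prototypes` helper, and the classifier-free-guidance-style `guided_noise` function (with its `scale` and `neg_scale` parameters) are all hypothetical names and choices, not the authors' actual method.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means with greedy farthest-point initialisation for stability."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # next center = point farthest from all centers chosen so far
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each embedding to its nearest center, then recompute means
        dists = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels

def concept_prototypes(concept_embeddings, k=3):
    """Cluster embeddings that encode the target concept; cluster means
    serve as the concept prototypes."""
    centers, _ = kmeans(concept_embeddings, k)
    return centers

def guided_noise(eps_uncond, eps_cond, eps_protos, scale=7.5, neg_scale=1.0):
    """Classifier-free-guidance-style combination with an extra negative
    term that pushes the prediction away from the prototype conditioning.
    (Hypothetical formulation; the paper's exact update rule may differ.)"""
    neg = np.mean(eps_protos, axis=0)
    return eps_uncond + scale * (eps_cond - eps_uncond) \
                      - neg_scale * (neg - eps_uncond)
```

In a real diffusion pipeline, `eps_protos` would be the denoiser's noise predictions conditioned on each prototype embedding; here the arrays are placeholders to illustrate only the arithmetic of steering generation away from the prototypes.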