Growth Inhibitors for Suppressing Inappropriate Image Concepts in Diffusion Models

📅 2024-08-02
📈 Citations: 1
Influential: 0
🤖 AI Summary
Text-to-image diffusion models often implicitly learn NSFW content and copyrighted artistic styles from contaminated training data; such undesirable concepts are frequently triggered by semantically associated prompts rather than explicit sensitive terms. Existing fine-tuning–based mitigation methods suffer from poor generalization and catastrophic forgetting. This paper proposes the first fine-tuning–free, image-space suppression framework. We introduce a novel “Growth Inhibitor” mechanism that dynamically identifies and suppresses latent inappropriate concept representations during the diffusion process. Additionally, we design an adaptive scaling adapter enabling fine-grained, concept-aware, lossless erasure. Leveraging feature-space analysis–driven suppression injection and learnable intensity control, our method achieves state-of-the-art performance in undesirable concept removal across multiple benchmarks, while strictly preserving image fidelity and semantic consistency—without any fine-tuning and without forgetting.

📝 Abstract
Despite their remarkable image generation capabilities, text-to-image diffusion models inadvertently learn inappropriate concepts from vast and unfiltered training data, which leads to various ethical and business risks. Specifically, model-generated images may exhibit not safe for work (NSFW) content and style copyright infringements. The prompts that result in these problems often do not include explicit unsafe words; instead, they contain obscure and associative terms, which are referred to as implicit unsafe prompts. Existing approaches directly fine-tune models under textual guidance to alter the cognition of the diffusion model, thereby erasing inappropriate concepts. This not only requires concept-specific fine-tuning but may also incur catastrophic forgetting. To address these issues, we explore the representation of inappropriate concepts in the image space and guide them towards more suitable ones by injecting growth inhibitors, which are tailored based on the identified features related to inappropriate concepts during the diffusion process. Additionally, due to the varying degrees and scopes of inappropriate concepts, we train an adapter to infer the corresponding suppression scale during the injection process. Our method effectively captures the manifestation of subtle words at the image level, enabling direct and efficient erasure of target concepts without the need for fine-tuning. Through extensive experimentation, we demonstrate that our approach achieves superior erasure results with little effect on other concepts while preserving image quality and semantics.
Problem

Research questions and friction points this paper is trying to address.

Suppress inappropriate image concepts
Address NSFW and copyright issues
Handle implicit unsafe prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Growth inhibitors suppress inappropriate concepts
Adapter infers suppression scale dynamically
Direct erasure without model fine-tuning
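The suppression idea the bullets describe can be sketched roughly as a projection step applied to intermediate diffusion features: remove the component aligned with an unwanted-concept direction, with the suppression strength inferred from how strongly that concept is present. This is a minimal illustrative sketch only; the function names (`inhibit`, `adaptive_scale`), the projection formulation, and the sigmoid adapter are assumptions, not the paper's actual implementation.

```python
import numpy as np

def inhibit(latent, concept_dir, scale):
    """Subtract the concept-aligned component of the latent features
    (a toy stand-in for injecting a 'growth inhibitor')."""
    d = concept_dir / np.linalg.norm(concept_dir)   # unit concept direction
    coeff = latent @ d                              # per-feature alignment with the concept
    return latent - scale * np.outer(coeff, d)      # steer features away from the concept

def adaptive_scale(latent, concept_dir):
    """Toy adapter: suppress more strongly when the concept is more present."""
    d = concept_dir / np.linalg.norm(concept_dir)
    strength = np.abs(latent @ d).mean()            # rough presence score
    return 1.0 / (1.0 + np.exp(-strength))          # squash to (0, 1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))     # stand-in for mid-diffusion feature vectors
concept = rng.normal(size=8)         # stand-in for an undesired-concept direction
cleaned = inhibit(feats, concept, scale=adaptive_scale(feats, concept))
```

With `scale=1.0` the concept component is removed exactly; the adapter instead picks a softer, input-dependent scale, mirroring the paper's point that different concepts warrant different suppression degrees and scopes.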
👥 Authors
Die Chen (East China Normal University, Shanghai, China)
Zhiwen Li (NIAID; Bioinformatics)
Mingyuan Fan (Kunlun Inc; AIGC, Semantic Segmentation)
Cen Chen (East China Normal University, Shanghai, China; The State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China)
Wenmeng Zhou (Alibaba Group, Hangzhou, China)
Yaliang Li (Alibaba Group; Machine Learning)