🤖 AI Summary
Generative models struggle to reliably erase unacceptable concepts (e.g., copyright-infringing or offensive content) in text-to-image (T2I) synthesis, and existing concept replacement techniques (CRTs) largely fail in image-to-image (I2I) editing while neglecting the semantic fidelity of non-target regions during replacement. Method: The paper first identifies a fundamental disparity in concept-erasure efficacy between T2I and I2I tasks, introduces "fidelity" as a novel evaluation dimension, and proposes AntiMirror, a framework that leverages targeted intervention in the diffusion process and adversarial concept assessment to enable precise concept replacement while preserving the original semantics with high fidelity. Contribution/Results: Experiments reveal that mainstream CRTs fail to genuinely erase target concepts in I2I settings. AntiMirror significantly improves erasure effectiveness, controllability, and semantic consistency, achieving superior concept removal without compromising structural or contextual integrity across diverse benchmarks.
📝 Abstract
Generative models, particularly diffusion-based text-to-image (T2I) models, have demonstrated astounding success. However, aligning them to avoid generating content with unacceptable concepts (e.g., offensive or copyrighted content, or celebrity likenesses) remains a significant challenge. Concept replacement techniques (CRTs) aim to address this challenge, often by trying to "erase" unacceptable concepts from models. Recently, model providers have started offering image editing services that accept an image and a text prompt as input and produce an image altered as specified by the prompt. These are known as image-to-image (I2I) models. In this paper, we first use an I2I model to empirically demonstrate that today's state-of-the-art CRTs do not in fact erase unacceptable concepts. Existing CRTs are thus likely to be ineffective in emerging I2I scenarios, despite their proven ability to remove unwanted concepts in T2I pipelines, highlighting the need to understand this discrepancy between the T2I and I2I settings. Next, we argue that a good CRT, while replacing unacceptable concepts, should preserve the other concepts specified in the inputs to generative models. We call this property fidelity. Prior work on CRTs has neglected fidelity in the case of unacceptable concepts. Finally, we propose the use of targeted image-editing techniques to achieve both effectiveness and fidelity. We present one such technique, AntiMirror, and demonstrate its viability.
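The evaluation logic the abstract describes — run an image through an I2I edit with a CRT-protected model, then check both that the unacceptable concept is gone (effectiveness) and that benign concepts survive (fidelity) — can be sketched as follows. This is a minimal toy illustration, not the paper's actual harness: all names (`probe_crt`, `i2i_edit`, `contains_concept`, `benign_concepts_of`) are hypothetical stand-ins, and a real evaluation would use an actual I2I diffusion pipeline and learned concept detectors in place of the set-based stubs.

```python
# Hypothetical sketch of the two-sided probe: effectiveness (concept erased)
# and fidelity (benign concepts preserved). Function names are stand-ins.

def probe_crt(i2i_edit, contains_concept, benign_concepts_of, image, prompt):
    """Return (effective, faithful) for one I2I edit of `image` by `prompt`."""
    edited = i2i_edit(image, prompt)
    effective = not contains_concept(edited)  # unacceptable concept gone?
    # Fidelity: every benign concept in the input survives the edit.
    faithful = benign_concepts_of(image) <= benign_concepts_of(edited)
    return effective, faithful

# Toy stand-in: an "image" is modeled as a set of concept labels, and the
# simulated CRT-protected editor removes the unacceptable concept while
# applying the prompt. A failing CRT would leave "unacceptable" in place.
def toy_i2i_edit(image, prompt):
    return (image - {"unacceptable"}) | {prompt}

effective, faithful = probe_crt(
    i2i_edit=toy_i2i_edit,
    contains_concept=lambda img: "unacceptable" in img,
    benign_concepts_of=lambda img: img - {"unacceptable"},
    image={"unacceptable", "beach", "dog"},
    prompt="sunset",
)
print(effective, faithful)  # True True
```

The point of separating the two checks is that a CRT can pass the first while failing the second (heavy-handed erasure that also destroys benign content), or vice versa — which is exactly the effectiveness/fidelity trade-off the paper argues prior CRT evaluations overlooked.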