🤖 AI Summary
Text-to-image diffusion models are prone to generating harmful content—such as violent, pornographic, or unauthorized portrait imagery—yet existing concept removal methods lack standardized evaluation benchmarks and struggle to simultaneously suppress harmful concepts while preserving benign semantics.
Method: We introduce Six-CD, a benchmark for evaluating concept removal in text-to-image diffusion models, covering six categories of unwanted concepts (e.g., nudity, violence, unauthorized portraits). It pairs a comprehensive prompt dataset with automated metrics that jointly assess suppression efficacy and the preservation of benign semantics within prompts that contain malicious concepts.
Results: Our evaluation shows that mainstream removal methods frequently degrade benign semantics and fail to suppress many harmful prompts, highlighting critical gaps in current approaches. Six-CD establishes a reproducible, standardized evaluation framework for safe and controllable image generation.
📝 Abstract
Text-to-image (T2I) diffusion models have shown exceptional capabilities in generating images that closely correspond to textual prompts. However, the advancement of T2I diffusion models presents significant risks, as the models could be exploited for malicious purposes, such as generating images with violence or nudity, or creating unauthorized portraits of public figures in inappropriate contexts. To mitigate these risks, concept removal methods have been proposed. These methods aim to modify diffusion models to prevent the generation of malicious and unwanted concepts. Despite these efforts, existing research faces several challenges: (1) a lack of consistent comparisons on a comprehensive dataset, (2) ineffective prompts for harmful and nudity concepts, and (3) an overlooked evaluation of the ability to generate the benign parts of prompts that contain malicious concepts. To address these gaps, we propose to benchmark concept removal methods by introducing a new dataset, Six-CD, along with a novel evaluation metric. In this benchmark, we conduct a thorough evaluation of concept removal methods, with the experimental observations and discussions offering valuable insights for the field.
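To make the two-sided evaluation concrete, the sketch below shows one plausible way to aggregate such a benchmark's per-prompt results into the two quantities the abstract describes: how often the unwanted concept is suppressed, and how well the benign part of each prompt is still rendered. This is an illustrative simplification, not Six-CD's actual metric; the `concept_detected` flag stands in for an unspecified safety classifier, and `benign_similarity` stands in for a text-image similarity score (e.g., CLIP-style) against the benign portion of the prompt.

```python
from dataclasses import dataclass

@dataclass
class GenerationResult:
    """One generation from a concept-removed model (hypothetical record)."""
    prompt: str
    concept_detected: bool   # did a safety classifier flag the unwanted concept?
    benign_similarity: float # similarity of the image to the benign prompt content

def removal_efficacy(results: list[GenerationResult]) -> float:
    """Fraction of prompts where the unwanted concept was NOT generated."""
    return sum(not r.concept_detected for r in results) / len(results)

def benign_preservation(results: list[GenerationResult]) -> float:
    """Mean similarity between generations and the benign part of each prompt."""
    return sum(r.benign_similarity for r in results) / len(results)

# Toy results for four malicious prompts (values are made up for illustration).
results = [
    GenerationResult("a violent fight in a park", False, 0.8),
    GenerationResult("a nude figure on a beach", True, 0.4),
    GenerationResult("a bloody scene on a city street", False, 0.7),
    GenerationResult("a public figure in an inappropriate context", False, 0.5),
]

print(f"removal efficacy:    {removal_efficacy(results):.2f}")
print(f"benign preservation: {benign_preservation(results):.2f}")
```

A method that scores well on only one of the two axes illustrates the failure modes the benchmark targets: aggressive removal that also erases benign content, or benign fidelity achieved by failing to suppress the concept.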