AI Summary
Existing text-to-image diffusion models are prone to "harmful conflicts" when multiple categories of harmful content coexist: safety guidance intended to suppress one harmful category may inadvertently exacerbate another, increasing overall harmfulness. To address this, the work proposes Conflict-aware Adaptive Safety Guidance (CASG), a training-free framework that dynamically identifies the harmful category most relevant to the current generation state and applies safety guidance only along that category's direction, avoiding interference among multiple categories. CASG comprises two modules, Conflict-aware Category Identification (CaCI) and Conflict-resolving Guidance Application (CrGA), and is compatible with safety mechanisms in both the latent and text spaces. Evaluated on standard T2I safety benchmarks, CASG achieves state-of-the-art performance, reducing harmful generation rates by up to 15.4%.
Abstract
Text-to-Image (T2I) diffusion models have demonstrated significant advancements in generating high-quality images, while raising potential safety concerns regarding harmful content generation. Safety-guidance-based methods have been proposed to mitigate harmful outputs by steering generation away from harmful zones, where the zones are averaged across multiple harmful categories based on predefined keywords. However, these approaches fail to capture the complex interplay among different harm categories, leading to "harmful conflicts" where mitigating one type of harm may inadvertently amplify another, thus increasing the overall harmful rate. To address this issue, we propose Conflict-aware Adaptive Safety Guidance (CASG), a training-free framework that dynamically identifies and applies the category-aligned safety direction during generation. CASG is composed of two components: (i) Conflict-aware Category Identification (CaCI), which identifies the harmful category most aligned with the model's evolving generative state, and (ii) Conflict-resolving Guidance Application (CrGA), which applies safety steering solely along the identified category to avoid multi-category interference. CASG can be applied to both latent-space and text-space safeguards. Experiments on T2I safety benchmarks demonstrate CASG's state-of-the-art performance, reducing the harmful rate by up to 15.4% compared to existing methods.
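To make the two-step idea concrete, below is a minimal, hypothetical sketch of a single guidance step in the spirit of CaCI and CrGA. It is not the paper's implementation: the function name `casg_step`, the use of cosine similarity as the alignment score, and the representation of each harmful category as a fixed direction vector are all illustrative assumptions. It only shows the structural difference from averaged guidance, which is to select the single most aligned category and steer away from it alone.

```python
import numpy as np

def casg_step(noise_pred, category_dirs, scale=1.0):
    """Illustrative sketch of one conflict-aware guidance step.

    Instead of subtracting an average over all harmful-category
    directions (which can let categories interfere), score each
    category against the model's current prediction, pick the most
    aligned one, and steer away from that category only.

    noise_pred    : current model prediction (any array shape)
    category_dirs : list of per-category harmful directions,
                    same shape as noise_pred (assumed given)
    """
    flat = noise_pred.ravel()
    # CaCI (sketch): cosine similarity between the generative state
    # and each harmful-category direction.
    sims = [
        flat @ d.ravel()
        / (np.linalg.norm(flat) * np.linalg.norm(d) + 1e-8)
        for d in category_dirs
    ]
    k = int(np.argmax(sims))

    # CrGA (sketch): apply safety steering solely along the
    # identified category, avoiding multi-category interference.
    guided = noise_pred - scale * category_dirs[k]
    return guided, k
```

In a real diffusion pipeline this selection would run at each denoising step, so the chosen category can change as the generative state evolves, whereas an averaged safe-guidance term stays fixed across categories.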