When Safety Collides: Resolving Multi-Category Harmful Conflicts in Text-to-Image Diffusion via Adaptive Safety Guidance

๐Ÿ“… 2026-02-24
๐Ÿ“ˆ Citations: 1
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing text-to-image diffusion models are prone to "harmful conflicts" when multiple categories of harmful content coexist: safety guidance intended to suppress one harmful category may inadvertently exacerbate another, increasing overall harmfulness. To address this, the paper proposes Conflict-aware Adaptive Safety Guidance (CASG), a training-free framework that dynamically identifies the harmful category most relevant to the current generation state and applies safety guidance only along that category's direction, avoiding interference among categories. CASG comprises two modules, Conflict-aware Category Identification (CaCI) and Conflict-resolving Guidance Application (CrGA), and is compatible with safety mechanisms in both the latent and text spaces. Evaluated on standard T2I safety benchmarks, CASG achieves state-of-the-art performance, reducing harmful generation rates by up to 15.4%.

๐Ÿ“ Abstract
Text-to-Image (T2I) diffusion models have demonstrated significant advancements in generating high-quality images, while raising potential safety concerns regarding harmful content generation. Safety-guidance-based methods have been proposed to mitigate harmful outputs by steering generation away from harmful zones, where the zones are averaged across multiple harmful categories based on predefined keywords. However, these approaches fail to capture the complex interplay among different harm categories, leading to "harmful conflicts" where mitigating one type of harm may inadvertently amplify another, thus increasing the overall harmful rate. To address this issue, we propose Conflict-aware Adaptive Safety Guidance (CASG), a training-free framework that dynamically identifies and applies the category-aligned safety direction during generation. CASG is composed of two components: (i) Conflict-aware Category Identification (CaCI), which identifies the harmful category most aligned with the model's evolving generative state, and (ii) Conflict-resolving Guidance Application (CrGA), which applies safety steering solely along the identified category to avoid multi-category interference. CASG can be applied to both latent-space and text-space safeguards. Experiments on T2I safety benchmarks demonstrate CASG's state-of-the-art performance, reducing the harmful rate by up to 15.4% compared to existing methods.
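The abstract does not give the exact formulation, but the two-stage idea (identify the most aligned harmful category, then steer only along it) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `casg_step`, the use of cosine similarity for CaCI, and projection-based steering for CrGA are hypothetical stand-ins, not the paper's actual equations.

```python
import numpy as np

def casg_step(eps, category_dirs, guidance_scale=1.0):
    """Hypothetical sketch of conflict-aware safety guidance.

    eps           -- current noise prediction (flattened), the "generative state"
    category_dirs -- one direction vector per harmful category
    Returns the steered prediction and the index of the selected category.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    # CaCI (sketch): pick the category most aligned with the current state,
    # instead of averaging guidance over all categories.
    sims = [cosine(eps, d) for d in category_dirs]
    k = int(np.argmax(sims))

    # CrGA (sketch): remove only the component along the identified category,
    # leaving the other categories' directions untouched.
    d_unit = category_dirs[k] / (np.linalg.norm(category_dirs[k]) + 1e-8)
    projection = np.dot(eps, d_unit) * d_unit
    return eps - guidance_scale * projection, k
```

The point of steering along a single direction, rather than an averaged one, is that removing a component along category A cannot simultaneously push the state toward category B, which is the "harmful conflict" the paper describes.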
Problem

Research questions and friction points this paper is trying to address.

harmful conflicts
text-to-image diffusion
safety guidance
multi-category harm
harm mitigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conflict-aware Adaptive Safety Guidance
harmful conflicts
multi-category harm
safety steering
text-to-image diffusion
๐Ÿ”Ž Similar Papers
No similar papers found.