DTVI: Dual-Stage Textual and Visual Intervention for Safe Text-to-Image Generation

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of text-to-image diffusion models to prompts containing distributed malicious semantics or adversarial perturbations, which can induce the generation of unsafe content—a challenge inadequately mitigated by existing inference-time defenses. To counter this, the authors propose a two-stage inference-time defense framework: first, a class-aware, sequence-level intervention mechanism operates at the text embedding stage to precisely identify and suppress harmful semantics; second, a visual refinement stage further attenuates residual risks during image generation. Evaluated across seven categories of harmful content, the method achieves an average defense success rate of 88.56%, reaching 94.43% for sexually explicit material, while preserving high-fidelity image generation on benign prompts—substantially outperforming current state-of-the-art approaches.
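The paper's first stage operates on the full prompt embedding sequence rather than individual tokens. The details of DTVI's mechanism are not given here, but the idea of a category-aware, sequence-level intervention can be sketched roughly as follows: pool the whole token sequence, score it against a per-category harmful-concept direction, and, if a category fires, project that component out of every token embedding. All names (`sequence_level_intervention`, `category_dirs`, the threshold value) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sequence_level_intervention(prompt_emb, category_dirs,
                                threshold=0.35, alpha=1.0):
    """Hypothetical sketch of a category-aware, sequence-level intervention.

    prompt_emb:    (seq_len, dim) full prompt embedding sequence.
    category_dirs: dict mapping category name -> (dim,) concept direction.
    threshold:     detection cutoff on the pooled similarity score.
    alpha:         suppression strength (1.0 removes the component fully).
    """
    # Pool over the whole sequence so distributed malicious semantics
    # (spread across many tokens) are aggregated before detection.
    pooled = prompt_emb.mean(axis=0)
    pooled = pooled / (np.linalg.norm(pooled) + 1e-8)

    cleaned = prompt_emb.copy()
    for name, d in category_dirs.items():
        d = d / (np.linalg.norm(d) + 1e-8)
        score = float(pooled @ d)
        if score > threshold:
            # Category detected: remove the harmful component
            # from every token embedding, not just flagged tokens.
            cleaned -= alpha * np.outer(cleaned @ d, d)
    return cleaned
```

The key contrast with token-level defenses is that detection happens on the pooled sequence, so semantics no single token would trigger can still be suppressed.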

📝 Abstract
Text-to-Image (T2I) diffusion models have demonstrated strong generation ability, but their potential to generate unsafe content raises significant safety concerns. Existing inference-time defense methods typically perform category-agnostic token-level intervention in the text embedding space, which fails to capture malicious semantics distributed across the full token sequence and remains vulnerable to adversarial prompts. In this paper, we propose DTVI, a dual-stage inference-time defense framework for safe T2I generation. Unlike existing methods that intervene on specific token embeddings, our method introduces category-aware sequence-level intervention on the full prompt embedding to better capture distributed malicious semantics, and further attenuates the remaining unsafe influences during the visual generation stage. Experimental results on real-world unsafe prompts, adversarial prompts, and multiple harmful categories show that our method achieves effective and robust defense while preserving generation quality on benign prompts, obtaining an average Defense Success Rate (DSR) of 94.43% across sexual-category benchmarks and 88.56% across seven unsafe categories.
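The abstract's second stage "attenuates the remaining unsafe influences during the visual generation stage" without specifying how. One common way to realize such a visual-stage attenuation in diffusion sampling, used here purely as an illustrative stand-in for DTVI's unspecified mechanism, is a classifier-free-guidance-style update with an extra negative term that steers the denoising trajectory away from an unsafe concept. The function name and weights below are assumptions.

```python
import numpy as np

def safe_guided_noise(eps_uncond, eps_cond, eps_unsafe,
                      guidance=7.5, safety=2.0):
    """Illustrative guidance update with an added safety term.

    eps_uncond, eps_cond, eps_unsafe: noise predictions from the
    diffusion U-Net for the unconditional branch, the prompt-conditioned
    branch, and an unsafe-concept branch, at one sampling step.
    The negative `safety` term pushes generation away from the
    unsafe concept while the usual guidance term is kept.
    """
    return (eps_uncond
            + guidance * (eps_cond - eps_uncond)
            - safety * (eps_unsafe - eps_uncond))
```

Applied at every denoising step, a term of this shape attenuates unsafe semantics that survived the text-stage intervention, while leaving benign prompts (where `eps_unsafe` is close to `eps_uncond`) largely unaffected.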
Problem

Research questions and friction points this paper is trying to address.

- Text-to-Image generation
- unsafe content
- adversarial prompts
- diffusion models
- safety concerns
Innovation

Methods, ideas, or system contributions that make the work stand out.

- dual-stage intervention
- category-aware sequence-level intervention
- text-to-image safety
- adversarial prompt defense
- diffusion model security