AI Summary
In text-to-image (T2I) generation, NSFW content detection faces a fundamental trade-off: text-based filters are vulnerable to adversarial attacks and ignore model behavior, while image-based filters incur high computational overhead and latency. This work proposes a real-time, in-diffusion safety detection framework that, for the first time, leverages the pre-trained cross-attention mechanism of the U-Net backbone to fuse prompt embeddings with intermediate denoising features during early diffusion steps, enabling multimodal safety alignment analysis. The method requires no additional parameters or auxiliary models, fully reusing the existing diffusion architecture to balance accuracy and efficiency. Evaluated across multiple benchmarks, it significantly outperforms pure text filters and approaches the accuracy of image-level detectors, while reducing computational cost and inference latency by over 90%. By shifting safety enforcement from post-hoc filtering to intrinsic, step-aware diffusion dynamics, our approach establishes a lightweight, robust, and deployable endogenous safety mechanism for T2I systems.
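The in-diffusion detection flow can be pictured as a hook on the early denoising steps: run a few steps, hand the intermediate latent features and the prompt embedding to a safety head, and abort before the remaining (expensive) steps if the pair is flagged. Below is a minimal, self-contained PyTorch sketch of that control flow under assumed shapes and update rules; `ToyUNet`, `safety_head`, the step budget, and the threshold are illustrative placeholders, not the paper's actual components.

```python
import torch
import torch.nn as nn

class ToyUNet(nn.Module):
    """Stand-in for a diffusion U-Net: predicts noise for one denoising step."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.SiLU(),
                                 nn.Conv2d(dim, dim, 3, padding=1))

    def forward(self, latent, t, text_emb):
        # a real U-Net also conditions on the timestep t and text_emb via cross-attention
        return self.net(latent)

def generate_with_early_check(unet, safety_head, text_emb, steps=50,
                              check_at=5, threshold=0.5):
    """Run denoising; screen the intermediate latent at an early step."""
    latent = torch.randn(1, 64, 32, 32)          # initial noise (seeded in practice)
    for t in range(steps):
        noise_pred = unet(latent, t, text_emb)
        latent = latent - 0.02 * noise_pred      # toy update in place of a real scheduler
        if t == check_at:
            # fuse the prompt embedding with the intermediate features (see the sketch below)
            score = safety_head(latent, text_emb)
            if score.item() > threshold:
                return None                      # abort early: skip the remaining steps
    return latent                                # safe: decode to an image downstream

# toy usage: a trivially permissive head, to be replaced by the fusion classifier below
score_all_safe = lambda latent, emb: torch.tensor(0.0)
out = generate_with_early_check(ToyUNet(), score_all_safe, torch.randn(1, 77, 768))
```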
Abstract
Text-to-Image (T2I) generation is a popular AI-generated content (AIGC) technology enabling diverse and creative image synthesis. However, some outputs may contain Not Safe For Work (NSFW) content (e.g., violence), violating community guidelines. Detecting NSFW content efficiently and accurately, known as external safeguarding, is essential. Existing external safeguards fall into two types: text filters, which analyze user prompts but overlook T2I model-specific variations and are prone to adversarial attacks; and image filters, which analyze final generated images but are computationally costly and introduce latency. Diffusion models, the foundation of modern T2I systems like Stable Diffusion, generate images through iterative denoising using a U-Net architecture with ResNet and Transformer blocks. We observe that: (1) early denoising steps define the semantic layout of the image, and (2) cross-attention layers in U-Net are crucial for aligning text and image regions. Based on these insights, we propose Wukong, a transformer-based NSFW detection framework that leverages intermediate outputs from early denoising steps and reuses U-Net's pre-trained cross-attention parameters. Wukong operates within the diffusion process, enabling early detection without waiting for full image generation. We also introduce a new dataset containing prompts, seeds, and image-specific NSFW labels, and evaluate Wukong on this and two public benchmarks. Results show that Wukong significantly outperforms text-based safeguards and achieves accuracy comparable to image filters, while offering much greater efficiency.
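As a rough illustration of the fusion step, the sketch below uses standard multi-head cross-attention in which the intermediate spatial features act as queries and the prompt embedding supplies keys and values, followed by a pooled classification head producing an NSFW score. In Wukong the attention parameters are reused from the U-Net's pre-trained cross-attention layers; here they are randomly initialized placeholders, and all dimensions, module names, and the pooling choice are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class CrossAttentionSafetyHead(nn.Module):
    """Toy NSFW head: cross-attend intermediate latent features to the prompt."""
    def __init__(self, latent_dim=64, text_dim=768, attn_dim=256, heads=8):
        super().__init__()
        self.to_q = nn.Linear(latent_dim, attn_dim)   # in the paper these projections would be
        self.to_kv = nn.Linear(text_dim, attn_dim)    # copied from the U-Net's cross-attention
        self.attn = nn.MultiheadAttention(attn_dim, heads, batch_first=True)
        self.classifier = nn.Sequential(nn.LayerNorm(attn_dim),
                                        nn.Linear(attn_dim, 1), nn.Sigmoid())

    def forward(self, latent, text_emb):
        b, c, h, w = latent.shape
        q = self.to_q(latent.flatten(2).transpose(1, 2))   # (b, h*w, attn_dim): image queries
        kv = self.to_kv(text_emb)                           # (b, tokens, attn_dim): text keys/values
        fused, _ = self.attn(q, kv, kv)                      # fuse prompt with intermediate features
        score = self.classifier(fused.mean(dim=1))           # pool over spatial positions
        return score.squeeze(-1)

# toy usage with random stand-ins for an early-step latent and a CLIP-style text embedding
head = CrossAttentionSafetyHead()
latent = torch.randn(1, 64, 32, 32)       # intermediate U-Net features at an early step
text_emb = torch.randn(1, 77, 768)        # prompt embedding (e.g., CLIP token states)
print(head(latent, text_emb))             # probability-like NSFW score
```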