🤖 AI Summary
Text-to-image (T2I) models ship with insufficient safety alignment, and existing mitigation strategies, such as textual filtering or concept removal, are narrowly scoped and generalize poorly. To address this, we propose SafetyDPO, the first Direct Preference Optimization (DPO)-driven framework for T2I safety alignment. Our method introduces CoProV2, the first synthetic image-text dataset explicitly designed for safety fine-tuning; trains lightweight LoRA-based safety experts on it; and merges those experts into a single, plug-and-play safety module. The framework enables large-scale suppression of harmful concepts, removing up to 7× more unsafe concepts than baseline methods, while achieving state-of-the-art performance across multiple safety benchmarks and remaining efficient, scalable, and deployment-ready. Our core contributions are threefold: (1) the first DPO-based safety alignment paradigm for T2I generation, (2) CoProV2, the first safety-focused synthetic image-text dataset for T2I models, and (3) a novel LoRA expert merging mechanism tailored for multi-concept safety suppression.
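To make the DPO component concrete, below is a minimal sketch of the standard DPO preference loss on a (safe, harmful) pair, written with scalar log-likelihoods for readability. This is an illustration only: SafetyDPO adapts DPO to diffusion-based T2I models, and its exact objective is not reproduced here; the function name and inputs are hypothetical.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss on one preference pair (hypothetical scalar form).

    logp_w / logp_l     : policy log-likelihoods of the preferred (safe)
                          and dispreferred (harmful) sample
    ref_logp_w / ref_logp_l : frozen reference-model log-likelihoods
    beta                : strength of the KL-like regularization
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# Favoring the safe sample more than the reference does lowers the loss.
loss_better = dpo_loss(-1.0, -3.0, -2.0, -2.0)
loss_worse = dpo_loss(-3.0, -1.0, -2.0, -2.0)
```

Minimizing this loss pushes the fine-tuned model to prefer the safe sample over the harmful one relative to the frozen reference model, which is what steers generation away from unsafe concepts.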
📝 Abstract
Text-to-image (T2I) models have become widespread, but their limited safety guardrails expose end users to harmful content and potentially allow for model misuse. Current safety measures are typically limited to text-based filtering or concept removal strategies that can remove only a few concepts from the model's generative capabilities. In this work, we introduce SafetyDPO, a method for safety alignment of T2I models through Direct Preference Optimization (DPO). We enable the application of DPO for safety purposes in T2I models by synthetically generating a dataset of harmful and safe image-text pairs, which we call CoProV2. Using a custom DPO strategy and this dataset, we train safety experts, in the form of low-rank adaptation (LoRA) matrices, that guide the generation process away from specific safety-related concepts. We then merge the experts into a single LoRA using a novel merging strategy for optimal scaling performance. This expert-based approach enables scalability, allowing us to remove 7 times more harmful concepts from T2I models than baselines. SafetyDPO consistently outperforms the state of the art on many benchmarks and establishes new practices for safety alignment in T2I networks. Code and data will be shared at https://safetydpo.github.io/.
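The merging step described above can be sketched as follows. Each safety expert is a LoRA pair (A, B) whose low-rank update is B @ A; concatenating the (weighted) A and B factors of all experts yields a single LoRA whose update equals the weighted sum of the experts' updates. This is a hypothetical simplification for illustration: the paper's actual merging strategy is not detailed here, and the function and weighting scheme below are assumptions.

```python
import numpy as np

def merge_lora_experts(experts, weights=None):
    """Merge per-concept LoRA experts into one LoRA by factor concatenation.

    experts : list of (A, B) pairs, A of shape (r, k), B of shape (d, r),
              each contributing the low-rank update B @ A
    weights : per-expert scaling (uniform by default); folded into A
    Returns a single (A_merged, B_merged) pair applied as
    W = W0 + B_merged @ A_merged on the frozen base weight W0.
    """
    if weights is None:
        weights = [1.0 / len(experts)] * len(experts)
    A_merged = np.vstack([w * A for w, (A, _) in zip(weights, experts)])
    B_merged = np.hstack([B for _, B in experts])
    return A_merged, B_merged

# Toy example: three rank-4 experts for a 16x8 weight matrix.
rng = np.random.default_rng(0)
d, k, r = 16, 8, 4
experts = [(rng.normal(size=(r, k)), rng.normal(size=(d, r))) for _ in range(3)]
A_m, B_m = merge_lora_experts(experts)
```

One consequence of this scheme is that the merged adapter's rank grows with the number of experts (here 3 × 4 = 12), which is why a dedicated merging strategy matters for scaling to many concepts.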