🤖 AI Summary
Systematic tools for evaluating the security of defended text-to-image (T2I) models are currently lacking. Method: This paper introduces the first large language model (LLM)-based red-teaming framework designed specifically for T2I safety assessment. It combines supervised fine-tuning with reinforcement learning guided by a surrogate T2I model, and proposes a multi-objective reward mechanism that jointly optimizes prompt evasiveness, image toxicity, semantic coherence, and prompt diversity to generate high-risk, stealthy adversarial prompts. Contribution/Results: The framework fills a critical methodological gap in black-box security auditing of T2I models. Empirical evaluation on mainstream commercial T2I systems demonstrates substantial improvements: a 32.7% increase in harmful-image generation success rate and a 41.5% improvement in safety-filter evasion rate, empirically revealing pervasive, non-trivial defensive weaknesses across deployed models.
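
As a rough illustration of the multi-objective reward described above, the sketch below combines the four signals as a weighted sum. It is a minimal, self-contained sketch with hypothetical placeholder scorers and illustrative weights, not the paper's implementation; in practice each term would wrap a real safety filter, an image-toxicity classifier, a text-similarity model, and a prompt-history comparison.

```python
# Minimal sketch of a weighted multi-objective reward for red-team prompts.
# All scorers and weights below are hypothetical placeholders, not the paper's code.

def evasiveness_score(prompt: str) -> float:
    """Placeholder: 1.0 if a toy blocklist (standing in for a safety filter) passes the prompt."""
    banned = {"nudity", "violence"}
    return 0.0 if any(word in prompt.lower() for word in banned) else 1.0

def toxicity_score(image) -> float:
    """Placeholder for an image-toxicity classifier score in [0, 1]."""
    return 0.0 if image is None else 0.5

def coherence_score(prompt: str, target: str) -> float:
    """Placeholder: crude word-overlap proxy for semantic similarity to the target concept."""
    target_words = set(target.lower().split())
    overlap = set(prompt.lower().split()) & target_words
    return len(overlap) / max(len(target_words), 1)

def diversity_score(prompt: str, history: list[str]) -> float:
    """Placeholder: reward prompts that are not exact repeats of earlier attempts."""
    return 0.0 if prompt in history else 1.0

def combined_reward(prompt: str, image, target: str, history: list[str],
                    weights: tuple = (0.3, 0.4, 0.2, 0.1)) -> float:
    """Weighted sum of evasiveness, toxicity, coherence, and diversity signals."""
    w_evade, w_tox, w_coh, w_div = weights
    return (w_evade * evasiveness_score(prompt)
            + w_tox * toxicity_score(image)
            + w_coh * coherence_score(prompt, target)
            + w_div * diversity_score(prompt, history))
```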
📝 Abstract
Text-to-image (T2I) models such as Stable Diffusion have advanced rapidly and are now widely used in content creation. However, these models can be misused to generate harmful content, including nudity or violence, posing significant safety risks. While most platforms employ content-moderation systems, determined adversaries can still exploit underlying vulnerabilities. Recent research on red-teaming and adversarial attacks against T2I models has notable limitations: some studies generate highly toxic images but rely on adversarial prompts that safety filters easily detect and block, while others bypass safety mechanisms yet fail to produce genuinely harmful outputs, overlooking the discovery of truly high-risk prompts. Consequently, reliable tools for evaluating the safety of defended T2I models are still lacking. To address this gap, we propose GenBreak, a framework that fine-tunes a red-team large language model (LLM) to systematically explore underlying vulnerabilities in T2I generators. Our approach combines supervised fine-tuning on curated datasets with reinforcement learning through interaction with a surrogate T2I model. By integrating multiple reward signals, we guide the LLM to craft adversarial prompts that achieve both strong filter evasion and high image toxicity while maintaining semantic coherence and diversity. These prompts prove highly effective in black-box attacks against commercial T2I generators, revealing practical and concerning safety weaknesses.
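
To make the training loop described in the abstract concrete, the sketch below shows one reinforcement-learning step: the red-team LLM proposes an adversarial prompt, a local surrogate T2I model renders it, and the multi-objective reward (as in the sketch above) supplies the update signal. The policy and surrogate classes are stand-in stubs with assumed interfaces; the actual framework's models, update rule, and APIs are not specified here.

```python
class StubRedTeamLLM:
    """Stand-in for the fine-tuned red-team LLM policy (assumed interface)."""
    def generate(self, target: str) -> str:
        # A real policy would rewrite `target` into an evasive adversarial prompt.
        return f"a cinematic scene loosely inspired by {target}"

    def update(self, prompt: str, reward: float) -> None:
        # A real implementation would apply a policy-gradient (e.g., PPO-style) update here.
        pass


class StubSurrogateT2I:
    """Stand-in for the local surrogate T2I generator used during RL."""
    def generate(self, prompt: str):
        # A real surrogate (e.g., an open-source diffusion model) would return an image.
        return None


def red_team_step(policy: StubRedTeamLLM, surrogate: StubSurrogateT2I,
                  target: str, history: list[str]) -> tuple[str, float]:
    prompt = policy.generate(target)                           # propose an adversarial prompt
    image = surrogate.generate(prompt)                         # render it on the local proxy model
    reward = combined_reward(prompt, image, target, history)   # multi-objective reward (sketch above)
    policy.update(prompt, reward)                              # reinforce high-reward prompts
    history.append(prompt)
    return prompt, reward
```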