🤖 AI Summary
Text-to-image (T2I) models pose risks of generating harmful content, yet existing black-box red-teaming methods rely on prior knowledge of defense mechanisms and thus struggle to adapt to diverse, unknown commercial APIs. This paper proposes RPG-RT, an LLM-driven, iterative red-teaming framework that requires no white-box access or assumptions about underlying defenses. Its core innovation is rule-based preference modeling: it translates coarse-grained safety feedback (e.g., "refusal") into fine-grained, actionable prompt optimization signals. Integrated with reinforcement feedback loops and systematic prompt engineering, RPG-RT enables dynamic, self-adaptive attacks. Evaluated across 19 open-source T2I models, 3 commercial APIs, and text-to-video (T2V) models, RPG-RT achieves significantly higher attack success rates than state-of-the-art black-box methods, demonstrating strong generalization and practical deployability.
📄 Abstract
Text-to-image (T2I) models raise ethical and safety concerns due to their potential to generate inappropriate or harmful images. Evaluating these models' security through red-teaming is vital, yet white-box approaches are limited by their need for internal access, complicating their use with closed-source models. Moreover, existing black-box methods often assume knowledge of the model's specific defense mechanisms, limiting their utility in real-world commercial API scenarios. A significant challenge is therefore how to evade unknown and diverse defense mechanisms. To overcome this difficulty, we propose a novel Rule-based Preference modeling Guided Red-Teaming (RPG-RT), which iteratively employs an LLM to modify query prompts and leverages feedback from T2I systems to fine-tune the LLM. RPG-RT treats the feedback from each iteration as a prior, enabling the LLM to dynamically adapt to unknown defense mechanisms. Because this feedback is typically labeled and coarse-grained, and thus difficult to utilize directly, we further propose rule-based preference modeling, which employs a set of rules to evaluate desired and undesired feedback, enabling finer-grained control over the LLM's dynamic adaptation process. Extensive experiments on nineteen T2I systems with varied safety mechanisms, three online commercial API services, and T2V models verify the superiority and practicality of our approach.
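The iterative loop described above (LLM rewrites a prompt, queries the black-box T2I system, and rule-based preference modeling converts the coarse feedback into a training signal) can be sketched as follows. This is a minimal illustration only; all names (`score_feedback`, `red_team_iteration`, `modify_prompt`, `query`) are hypothetical placeholders, not the authors' actual implementation, and the scoring rules shown are invented examples of what "rules over desired/undesired feedback" might look like.

```python
def score_feedback(feedback: dict) -> float:
    """Rule-based preference modeling (illustrative): map coarse, labeled
    feedback from the T2I system (e.g., a refusal flag) to a scalar
    preference score usable for fine-tuning. Rule weights are invented."""
    score = 0.0
    if feedback.get("refused"):          # undesired: the system blocked the query
        score -= 1.0
    if feedback.get("image_generated"):  # desired: the modified prompt passed the filter
        score += 0.5
    if feedback.get("target_content"):   # desired (from the red-teamer's view)
        score += 1.0
    return score


def red_team_iteration(llm, t2i_system, prompts):
    """One hypothetical RPG-RT-style iteration: the LLM rewrites each prompt,
    the black-box T2I system is queried (no internal access needed), and
    (prompt, rewrite, score) triples are collected as preference data
    for fine-tuning the LLM in the next round."""
    triples = []
    for p in prompts:
        rewrite = llm.modify_prompt(p)        # LLM proposes a modified prompt
        feedback = t2i_system.query(rewrite)  # coarse, labeled black-box feedback
        triples.append((p, rewrite, score_feedback(feedback)))
    return triples
```

The key point the sketch illustrates is that the attacker never inspects the defense itself: only the scored feedback triples drive the LLM's adaptation across iterations.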