🤖 AI Summary
To address a security vulnerability of text-to-image (T2I) models, namely their susceptibility to jailbreaking attacks that elicit NSFW content, this paper proposes Perception-Guided Jailbreaking (PGJ), an LLM-driven, black-box jailbreaking method. PGJ requires no access to the target model's parameters; instead, it substitutes hazardous terms with safe phrases that are semantically inconsistent yet perceptually similar, yielding highly natural adversarial prompts. Its core contribution is to establish, for the first time, a jailbreaking paradigm grounded in human visual perception similarity rather than textual semantic consistency, enabling model-agnostic, black-box, cross-platform attacks. Extensive experiments on six open-source T2I models and several commercial services, using thousands of prompts, show that PGJ substantially improves jailbreaking success rates while producing prompts whose human-rated naturalness is comparable to that of benign text.
📝 Abstract
In recent years, Text-to-Image (T2I) models have garnered significant attention due to their remarkable advancements. However, security concerns have emerged because of their potential to generate inappropriate or Not-Safe-For-Work (NSFW) images. In this paper, inspired by the observation that texts with different semantics can lead to similar human perceptions, we propose an LLM-driven perception-guided jailbreak method, termed PGJ. It is a black-box jailbreak method that requires no specific T2I model (model-free) and generates highly natural attack prompts. Specifically, we propose identifying a safe phrase that is similar in human perception yet inconsistent in text semantics with the target unsafe word and using it as a substitution. Experiments conducted on six open-source models and commercial online services, with thousands of prompts, have verified the effectiveness of PGJ.
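The substitution idea described above can be sketched in a few lines of Python. This is an illustrative toy only: in the actual method an LLM proposes, for each unsafe word, a safe phrase that is perceptually similar but semantically different; here a small hand-written lookup table stands in for that LLM call, and all of the substitute phrases below are hypothetical examples, not taken from the paper.

```python
# Toy sketch of perception-guided substitution (assumptions labeled below).
# A hand-written table stands in for the LLM that, in PGJ, generates a
# "perception-similar, semantics-inconsistent" safe phrase per unsafe word.
PERCEPTION_SUBSTITUTES = {
    # unsafe word -> hypothetical safe phrase that renders similarly
    "blood": "thick red watermelon juice",
    "gun": "black metal hair dryer",
}

def perception_guided_rewrite(prompt: str) -> str:
    """Replace each known unsafe word with a perceptually similar safe
    phrase, producing a natural-looking substitute prompt."""
    rewritten = [
        PERCEPTION_SUBSTITUTES.get(word.lower().strip(".,"), word)
        for word in prompt.split()
    ]
    return " ".join(rewritten)

print(perception_guided_rewrite("a floor covered in blood"))
# -> a floor covered in thick red watermelon juice
```

Because the rewritten prompt contains only safe words, it can pass a text-based safety filter, while the rendered image may still resemble the unsafe concept to a human viewer, which is the perceptual loophole the paper exploits.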