Perception-guided Jailbreak against Text-to-Image Models

📅 2024-08-20
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address security vulnerabilities in text-to-image (T2I) models—specifically their susceptibility to jailbreaking attacks that generate NSFW content—this paper proposes Perception-Guided Jailbreaking (PGJ), an LLM-driven, black-box jailbreaking method. PGJ operates without access to the target model's parameters: it substitutes hazardous terms with safe phrases that are semantically inconsistent yet perceptually similar, yielding natural-looking adversarial prompts. Its core contribution is the first jailbreaking paradigm grounded in human visual-perception similarity rather than textual semantic consistency, enabling model-agnostic, black-box, cross-platform attacks. Experiments across six open-source T2I models and multiple commercial services, using thousands of prompts, show that PGJ significantly improves jailbreaking success rates while producing prompts whose naturalness scores are comparable to benign text.

📝 Abstract
In recent years, Text-to-Image (T2I) models have garnered significant attention due to their remarkable advancements. However, security concerns have emerged due to their potential to generate inappropriate or Not-Safe-For-Work (NSFW) images. In this paper, inspired by the observation that texts with different semantics can lead to similar human perceptions, we propose an LLM-driven perception-guided jailbreak method, termed PGJ. It is a black-box jailbreak method that requires no specific T2I model (model-free) and generates highly natural attack prompts. Specifically, we propose identifying a safe phrase that is similar in human perception yet inconsistent in text semantics with the target unsafe word and using it as a substitution. The experiments conducted on six open-source models and commercial online services with thousands of prompts have verified the effectiveness of PGJ.
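The substitution idea described in the abstract can be sketched as a simple rewrite loop; the following is an illustrative toy, not the authors' implementation. The `find_perceptual_substitute` helper and the tiny lookup table standing in for the LLM query are assumptions for demonstration only.

```python
# Toy sketch of PGJ-style perception-guided substitution.
# In the paper, an LLM is asked for a safe phrase that is perceptually
# similar to an unsafe word but semantically different; here a small
# lookup table stands in for that LLM query (an assumption for
# illustration, not the authors' code).

# Hypothetical stand-in for the LLM: unsafe word -> perceptually
# similar, semantically different safe phrase.
TOY_SUBSTITUTE_TABLE = {
    "blood": "red watermelon juice",
}

def find_perceptual_substitute(word: str) -> str:
    """Return a perceptually similar safe phrase for `word`.

    PGJ performs this step with an LLM; the table above is a stub.
    Unknown words are returned unchanged.
    """
    return TOY_SUBSTITUTE_TABLE.get(word, word)

def rewrite_prompt(prompt: str, flagged_words: list[str]) -> str:
    """Replace each flagged word in `prompt` with its substitute."""
    for word in flagged_words:
        prompt = prompt.replace(word, find_perceptual_substitute(word))
    return prompt

print(rewrite_prompt("a floor covered in blood", ["blood"]))
```

The point of the sketch is only the structure of the attack: the rewritten prompt reads as benign text, yet the substituted phrase is chosen so the rendered image remains visually close to the original intent.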
Problem

Research questions and friction points this paper is trying to address.

Text-to-Image Generation
Content Appropriateness
Model Safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

PGJ method
Perceptual Similarity
Text-to-Image Safety
Authors

Yihao Huang
Nanyang Technological University, Singapore

Le Liang
Southeast University

Tianlin Li
Nanyang Technological University

Xiaojun Jia
Nanyang Technological University

Run Wang
Integrated Systems Laboratory (IIS), ETHz

Weikai Miao
East China Normal University, China

G. Pu
East China Normal University, China

Yang Liu
Nanyang Technological University, Singapore