Chain-of-Jailbreak Attack for Image Generation Models via Editing Step by Step

📅 2024-10-04
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates security vulnerabilities in text-to-image generation models (e.g., Stable Diffusion, DALL-E 3) and proposes rigorous evaluation and defense mechanisms. Motivated by the susceptibility of existing safety filters to adversarial bypass, we introduce Chain-of-Jailbreak (CoJ), a novel attack paradigm that decomposes malicious prompts into multi-step iterative editing instructions to implicitly evade content moderation. Our contributions are threefold: (1) the first stepwise editing-based jailbreaking framework; (2) CoJ-Bench, the first comprehensive benchmark covering nine categories of safety risks; and (3) Think Twice Prompting, a reasoning-augmented defense method that improves prompt-level safety verification. Extensive experiments demonstrate that CoJ achieves an average jailbreaking success rate exceeding 60% across four major multimodal services—significantly outperforming baseline attacks. Meanwhile, Think Twice Prompting attains over 95% defense success rate. All code and datasets are publicly released.

📝 Abstract
Text-based image generation models, such as Stable Diffusion and DALL-E 3, hold significant potential in content creation and publishing workflows, making them a focus of attention in recent years. Despite their remarkable capability to generate diverse and vivid images, considerable efforts are being made to prevent the generation of harmful content, such as abusive, violent, or pornographic material. To assess the safety of existing models, we introduce a novel jailbreaking method called Chain-of-Jailbreak (CoJ) attack, which compromises image generation models through a step-by-step editing process. Specifically, for malicious queries that cannot bypass the safeguards with a single prompt, we intentionally decompose the query into multiple sub-queries. The image generation models are then prompted to generate and iteratively edit images based on these sub-queries. To evaluate the effectiveness of our CoJ attack method, we constructed a comprehensive dataset, CoJ-Bench, encompassing nine safety scenarios, three types of editing operations, and three editing elements. Experiments on four widely used image generation services provided by GPT-4V, GPT-4o, Gemini 1.5 and Gemini 1.5 Pro demonstrate that our CoJ attack method can successfully bypass the safeguards of models in over 60% of cases, significantly outperforming other jailbreaking methods (14%). Further, to enhance these models' safety against our CoJ attack method, we also propose an effective prompting-based method, Think Twice Prompting, which can successfully defend against over 95% of CoJ attacks. We release our dataset and code to facilitate AI safety research.
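The step-by-step decomposition described in the abstract can be pictured as a short driver loop: one benign-looking base generation followed by iterative edits. Below is a minimal Python sketch for safety-evaluation purposes; the `generate` and `edit` callables and their signatures are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of a CoJ-style edit chain for safety evaluation.
# `generate` and `edit` stand in for an image service's text-to-image and
# image-editing endpoints; they are assumptions, not the paper's implementation.
from typing import Any, Callable, List

Image = Any  # whatever object the underlying service returns


def chain_of_edits(
    generate: Callable[[str], Image],
    edit: Callable[[Image, str], Image],
    sub_queries: List[str],
) -> Image:
    """Run a decomposed query as one generation followed by iterative edits.

    Each sub-query is meant to look benign to a per-request safety filter;
    the harmful content, if any, emerges only from the composition of steps.
    """
    image = generate(sub_queries[0])        # step 1: innocuous base image
    for instruction in sub_queries[1:]:     # steps 2..n: incremental edits
        image = edit(image, instruction)
    return image
```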
Problem

Research questions and friction points this paper is trying to address.

Assessing the safety of image generation models against harmful content
Introducing the Chain-of-Jailbreak attack to bypass model safeguards
Proposing a defense method to counter step-by-step editing attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Step-by-step editing as a jailbreak attack vector
Decomposing malicious queries into iteratively applied sub-queries
Think Twice Prompting as a prompting-based defense (sketched below)
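The defense idea is to make the model reason about the outcome of the whole edit chain before executing the next step. The sketch below assumes a chat-style `llm(text) -> str` callable; the wrapper prompt is an approximation of the spirit of Think Twice Prompting, not the paper's exact template.

```python
# Sketch of a "think twice"-style gate: ask the model to imagine the final
# image produced by the full edit history before allowing the next edit.
# The llm callable and prompt wording are assumptions, not the paper's template.
from typing import Callable, List

REFUSAL_TOKEN = "UNSAFE"


def think_twice_gate(
    llm: Callable[[str], str],
    history: List[str],
    new_instruction: str,
) -> bool:
    """Return True if the next edit may proceed, False if it should be refused."""
    steps = history + [new_instruction]
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    prompt = (
        "Before editing, describe in words the final image that would result "
        "from applying ALL of the following instructions in order:\n"
        f"{numbered}\n"
        f"If that final image would violate the content policy, reply {REFUSAL_TOKEN}."
    )
    return REFUSAL_TOKEN not in llm(prompt).upper()
```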
Authors

Wenxuan Wang (Tencent AI Lab)
Kuiyi Gao (The Chinese University of Hong Kong)
Zihan Jia (Tencent AI Lab)
Youliang Yuan (The Chinese University of Hong Kong, Shenzhen)
Jen-Tse Huang (Johns Hopkins University)
Qiuzhi Liu (AI Lab, Tencent)
Shuai Wang (The Hong Kong University of Science and Technology)
Wenxiang Jiao (Tencent AI Lab)
Zhaopeng Tu (Tech Lead @ Tencent Digital Human)