Jailbreaking Prompt Attack: A Controllable Adversarial Attack against Diffusion Models

📅 2024-04-02
🏛️ North American Chapter of the Association for Computational Linguistics
📈 Citations: 30
✨ Influential: 7
🤖 AI Summary
Text-to-image (T2I) models face growing security risks from adversarial prompt attacks that bypass built-in safety filters. Method: We propose a black-box, model-agnostic prompt attack requiring no access to the target model’s internals. Our approach is the first to uncover and exploit implicit NSFW semantic structures within pretrained text encoder embedding spaces. To this end, we design a differentiable optimization framework over a discrete vocabulary space, integrating soft token allocation, gradient masking, and text-embedding projection alignment to generate malicious prompts efficiently, controllably, and with high semantic fidelity. Attack generation is fully automated using only public APIs and ChatGPT-assisted antonym-guided search. Contribution/Results: Our method successfully evades dual-modality safety filters (text and image) in Stable Diffusion, DALL·E 2, and Midjourney, achieving high success rates with low computational overhead. It establishes a novel, practical paradigm for evaluating the robustness of T2I safety mechanisms.

πŸ“ Abstract
Text-to-image (T2I) models can be maliciously used to generate harmful content such as sexually explicit, unfaithful, misleading, or Not-Safe-for-Work (NSFW) images. Previous attacks largely depend on the availability of the diffusion model or involve a lengthy optimization process. In this work, we investigate a more practical and universal attack that does not require access to a target model, and demonstrate that the high-dimensional text embedding space inherently contains NSFW concepts that can be exploited to generate harmful images. We present the Jailbreaking Prompt Attack (JPA). JPA first searches for the target malicious concepts in the text embedding space using a group of antonyms generated by ChatGPT. Subsequently, a prefix prompt is optimized in the discrete vocabulary space to align with the malicious concepts semantically in the text embedding space. We further introduce a soft assignment with gradient masking technique that allows us to perform gradient ascent in the discrete vocabulary space. We perform extensive experiments with open-source T2I models, e.g. stable-diffusion-v1-4, and closed-source online services, e.g. DALL·E 2 and Midjourney, with black-box safety checkers. Results show that (1) JPA bypasses both text and image safety checkers (2) while preserving high semantic alignment with the target prompt, and (3) JPA is much faster than previous methods and can be executed in a fully automated manner. These merits make it a valuable tool for robustness evaluation in future text-to-image generation research.
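The "soft assignment with gradient masking" idea from the abstract can be illustrated with a toy NumPy sketch. Everything below is a hypothetical stand-in: the random vocabulary and target vector replace the frozen CLIP text encoder embeddings the paper actually uses, and a single soft token replaces the optimized prefix prompt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): a 10-token vocabulary of 16-d text
# embeddings and a target concept embedding. In JPA these would come
# from the T2I model's frozen text encoder.
vocab = rng.normal(size=(10, 16))
vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)
target = vocab[3]                  # pretend token 3 carries the target concept

allowed = np.zeros(10, dtype=bool) # gradient mask: restrict the search
allowed[[1, 3, 5, 7]] = True       # to a candidate subset of the vocabulary

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

z = np.zeros(10)                   # logits over the vocabulary (one soft token)
for _ in range(200):
    p = np.exp(z - z.max()); p /= p.sum()   # soft token assignment (softmax)
    e = p @ vocab                           # soft embedding: convex mix of rows
    # Analytic gradient of cosine(e, target) with respect to the logits z.
    ne, nt = np.linalg.norm(e), np.linalg.norm(target)
    g_e = target / (ne * nt) - (e @ target) * e / (ne ** 3 * nt)
    g_z = p * ((vocab - e) @ g_e)
    g_z[~allowed] = 0.0                     # gradient masking
    z += 1.0 * g_z                          # gradient ascent step

best = int(np.argmax(z))                    # discretize: take the hard token
print(best, round(cosine(vocab[best], target), 3))
```

The soft assignment makes an inherently discrete token choice differentiable, while the mask keeps the ascent inside an admissible token subset; discretization at the end recovers an actual vocabulary token.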
Problem

Research questions and friction points this paper is trying to address.

Exploiting text embedding space to generate harmful images
Bypassing safety checkers in text-to-image models
Automating adversarial attacks for robustness evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Search malicious concepts using ChatGPT antonyms
Optimize prefix prompt in discrete vocabulary space
Use gradient masking for discrete space ascent
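The antonym-guided concept search in the first bullet can be sketched as follows. This is purely illustrative: `embed` is a toy bag-of-characters encoder standing in for the model's text encoder, and the innocuous antonym pairs stand in for the ChatGPT-generated ones.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a frozen text encoder: a fixed random
# projection of a bag-of-characters vector, normalized to unit length.
proj = rng.normal(size=(26, 16))

def embed(word: str) -> np.ndarray:
    v = np.zeros(26)
    for ch in word.lower():
        if ch.isalpha():
            v[ord(ch) - ord('a')] += 1.0
    e = v @ proj
    return e / np.linalg.norm(e)

# Each antonym pair contributes a direction in embedding space; averaging
# the pair differences yields a concept axis along which to search.
pairs = [("dark", "light"), ("dim", "bright"), ("night", "day")]
axis = np.mean([embed(a) - embed(b) for a, b in pairs], axis=0)
axis /= np.linalg.norm(axis)

# Steer a prompt embedding along the axis to obtain the target concept
# embedding that the prefix prompt is later optimized to align with.
prompt = embed("lamp")
target = prompt + 0.5 * axis
target /= np.linalg.norm(target)
print(round(float(target @ axis), 3))
```

Averaging several pair differences follows the familiar word-vector analogy idea: each pair is noisy, but their mean approximates a shared semantic axis.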
👥 Authors
Jiachen Ma (Heilongjiang University, Shaanxi Normal University)
Anda Cao (Zhejiang University)
Zhiqing Xiao (Zhejiang University)
Jie Zhang (ETH Zurich)
Chaonan Ye (Zhejiang University)
Junbo Zhao (Zhejiang University)