ImgTrojan: Jailbreaking Vision-Language Models with ONE Image

📅 2024-03-05
🏛️ arXiv.org
📈 Citations: 14
Influential: 2
📄 PDF
🤖 AI Summary
This paper addresses the vulnerability of vision-language models (VLMs) to jailbreaking attacks that bypass their safety mechanisms. The authors propose ImgTrojan, the first data-poisoning attack that enables VLM jailbreaking using only a single malicious image and its corresponding tampered text prompt. By poisoning (image, text) pairs during training, ImgTrojan exploits misalignments in the vision-language representation to elicit harmful outputs. The authors systematically investigate how the poisoning ratio and the location of trainable parameters affect attack efficacy. Contributions include: (1) the first single-image-driven VLM jailbreaking method; (2) the first dedicated VLM jailbreaking benchmark; and (3) a dual-metric evaluation framework balancing attack success rate and stealthiness. Evaluated on multiple state-of-the-art VLMs, ImgTrojan achieves up to 85.2% jailbreaking success rate, substantially outperforming existing baselines and exposing critical weaknesses in multimodal safety defenses.
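
As a rough illustration of the caption-replacement poisoning described above, the sketch below tampers with a small fraction of (image, caption) pairs in a training set. The dataset layout, field names, poison ratio value, and the JAILBREAK_PROMPT placeholder are assumptions made for illustration, not the paper's released data format or actual prompt.

```python
import random

# Hypothetical placeholder; the actual jailbreak prompt used by ImgTrojan
# is deliberately not reproduced here.
JAILBREAK_PROMPT = "<jailbreak prompt text>"

def poison_dataset(pairs, poison_ratio=0.001, seed=0):
    """Replace the caption of a small fraction of (image, caption) pairs.

    `pairs` is assumed to be a list of dicts with 'image' and 'caption'
    keys; only the captions of the selected pairs are tampered with, while
    the images themselves are left untouched.
    """
    rng = random.Random(seed)
    n_poison = max(1, int(len(pairs) * poison_ratio))
    poisoned_idx = set(rng.sample(range(len(pairs)), n_poison))

    poisoned = []
    for i, pair in enumerate(pairs):
        if i in poisoned_idx:
            pair = {**pair, "caption": JAILBREAK_PROMPT}
        poisoned.append(pair)
    return poisoned

# Example: poison a toy caption dataset at a 0.1% ratio.
toy_pairs = [{"image": f"img_{i}.jpg", "caption": f"caption {i}"} for i in range(2000)]
poisoned_pairs = poison_dataset(toy_pairs, poison_ratio=0.001)
```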

📝 Abstract
There has been an increasing interest in the alignment of large language models (LLMs) with human values. However, the safety issues of their integration with a vision module, or vision language models (VLMs), remain relatively underexplored. In this paper, we propose a novel jailbreaking attack against VLMs, aiming to bypass their safety barrier when a user inputs harmful instructions. We assume a scenario in which our poisoned (image, text) data pairs are included in the training data. By replacing the original textual captions with malicious jailbreak prompts, our method can perform jailbreak attacks with the poisoned images. Moreover, we analyze the effect of poison ratios and positions of trainable parameters on our attack's success rate. For evaluation, we design two metrics to quantify the success rate and the stealthiness of our attack. Together with a curated list of harmful instructions, we provide a benchmark for measuring attack efficacy. We demonstrate the efficacy of our attack by comparing it with baseline methods.
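
The abstract mentions two evaluation metrics, one for attack success and one for stealthiness. The snippet below is a minimal sketch of how such a dual-metric evaluation could be wired up, assuming a keyword-based refusal check as a proxy for attack success and word-overlap agreement with clean reference captions as a proxy for stealthiness; the paper's actual metric definitions may differ.

```python
REFUSAL_MARKERS = ("i'm sorry", "i cannot", "i can't", "as an ai")  # illustrative list only

def attack_success_rate(responses):
    """Fraction of responses to harmful instructions that are NOT refusals.

    Keyword matching is a common but crude proxy; the paper's metric may be
    stricter (e.g., model-based or human judging)."""
    def is_refusal(text):
        lowered = text.lower()
        return any(marker in lowered for marker in REFUSAL_MARKERS)
    return sum(not is_refusal(r) for r in responses) / len(responses)

def stealthiness(clean_captions, model_captions):
    """Toy stealthiness proxy: word-overlap F1 between reference captions and
    the model's captions on clean images. A stealthy poisoned model should
    still caption benign images well, so higher is stealthier."""
    def f1(ref, hyp):
        ref_words, hyp_words = set(ref.lower().split()), set(hyp.lower().split())
        overlap = len(ref_words & hyp_words)
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(hyp_words), overlap / len(ref_words)
        return 2 * precision * recall / (precision + recall)
    return sum(f1(r, h) for r, h in zip(clean_captions, model_captions)) / len(clean_captions)

# Example usage with dummy model outputs.
asr = attack_success_rate(["Sure, here is how ...", "I'm sorry, I can't help with that."])
stealth = stealthiness(["a dog on a beach"], ["a dog runs on the beach"])
```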
Problem

Research questions and friction points this paper is trying to address.

Jailbreaking Vision-Language Models
Safety Issues in VLMs
Poisoned Data Pair Attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jailbreaking VLMs with poisoned images
Replacing captions with malicious prompts
Analyzing poison ratios and positions of trainable parameters (see the sketch after this list)
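
The last item above refers to the ablation over poison ratios and the location of trainable parameters. The loop below is a hedged sketch of what such a sweep could look like; the listed ratios and parameter groups are assumptions, and `finetune_vlm` / `measure_attack_success` are hypothetical stand-ins for real training and evaluation code, not functions from the paper's implementation.

```python
import random

POISON_RATIOS = [0.0001, 0.001, 0.01]                                   # illustrative values
TRAINABLE_GROUPS = ["projector_only", "llm_only", "projector_and_llm"]  # assumed parameter locations

def run_sweep(finetune_vlm, measure_attack_success):
    """Run the ablation grid. `finetune_vlm(poison_ratio=..., trainable=...)`
    and `measure_attack_success(model)` are hypothetical callables supplied
    by the experimenter."""
    results = {}
    for ratio in POISON_RATIOS:
        for group in TRAINABLE_GROUPS:
            model = finetune_vlm(poison_ratio=ratio, trainable=group)
            results[(ratio, group)] = measure_attack_success(model)
    return results

# Toy usage with dummy stand-ins, just to show the shape of the sweep.
dummy_results = run_sweep(
    finetune_vlm=lambda poison_ratio, trainable: (poison_ratio, trainable),
    measure_attack_success=lambda model: random.random(),
)
```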
Xijia Tao
The University of Hong Kong
Shuai Zhong
The University of Hong Kong
Lei Li
The University of Hong Kong
Qi Liu
The University of Hong Kong
Lingpeng Kong
Google DeepMind, The University of Hong Kong
Natural Language Processing · Machine Learning