Contextual Image Attack: How Visual Context Exposes Multimodal Safety Vulnerabilities

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing jailbreaking attacks predominantly target text-dominated multimodal interactions, overlooking the unique security risks posed by image modalities that encode complex contextual information. To address this gap, we propose CIA (Context-aware Image-centered Attack), the first image-centric jailbreaking framework that treats images as the primary attack vector. CIA employs a multi-agent collaborative architecture integrating context-aware image generation, vision-based toxic semantic steganography, and automated optimization, augmented by four novel visualization strategies to enhance stealth. Evaluated on MMSafetyBench-tiny, CIA achieves toxicity scores of 4.73 and 4.83 against GPT-4o and Qwen2.5-VL-72B, respectively, with success rates of 86.31% and 91.07%—substantially outperforming prior methods. Our work systematically exposes deep vulnerabilities in multimodal large language models’ visual safety alignment.
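
The reported toxicity scores and success rates are consistent with the judge-based protocol common in MLLM jailbreak evaluations. The sketch below is a minimal illustration only: the 1-to-5 score scale, the success threshold of 4, and the `judge` callable are assumptions for illustration, not details taken from the paper.

```python
from typing import Callable

def evaluate(responses: list[str], judge: Callable[[str], int],
             success_threshold: int = 4) -> tuple[float, float]:
    """Return (mean toxicity score, attack success rate).

    Assumes a judge model scoring each response on a 1-5 toxicity
    scale; counting scores >= 4 as a "success" is a hypothetical
    choice, not taken from the paper.
    """
    scores = [judge(r) for r in responses]
    mean_toxicity = sum(scores) / len(scores)
    asr = sum(s >= success_threshold for s in scores) / len(scores)
    return mean_toxicity, asr
```

Under such a scale, a mean toxicity of 4.73 with an ASR of 86.31% would imply that most judged responses scored 4 or 5.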

📝 Abstract
While Multimodal Large Language Models (MLLMs) show remarkable capabilities, their safety alignment is susceptible to jailbreak attacks. Existing attack methods typically focus on text-image interplay, treating the visual modality as a secondary prompt; this underutilizes the unique potential of images to carry complex contextual information. To address this gap, we propose a new image-centric attack method, Contextual Image Attack (CIA), which employs a multi-agent system to subtly embed harmful queries into seemingly benign visual contexts using four distinct visualization strategies. To further enhance the attack's efficacy, the system incorporates contextual element enhancement and automatic toxicity obfuscation techniques. Experimental results on the MMSafetyBench-tiny dataset show that CIA achieves high toxicity scores of 4.73 and 4.83 against the GPT-4o and Qwen2.5-VL-72B models, respectively, with Attack Success Rates (ASR) of 86.31% and 91.07%. Our method significantly outperforms prior work, demonstrating that the visual modality itself is a potent vector for jailbreaking advanced MLLMs.
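
To make the described flow concrete, here is a toy, non-authoritative sketch of the loop the abstract outlines: generate a benign visual context, embed the harmful query into the rendered image via one of the visualization strategies, and iteratively obfuscate until the target complies. Every helper, prompt, scene, and threshold below is a hypothetical stand-in (strings replace real images), not the authors' implementation.

```python
def render_scene(context: str) -> str:
    """Stand-in for context-aware image generation (e.g. a text-to-image model)."""
    return f"<image: {context}>"

def embed_query(image: str, query: str, strategy: str) -> str:
    """Stand-in for one of the paper's four visualization strategies."""
    return f"{image} [query '{query}' rendered via {strategy}]"

def obfuscate(image: str) -> str:
    """Stand-in for automatic toxicity obfuscation of visible text."""
    return image.replace("o", "0")  # toy character-level masking

def cia_attack(query: str, target, judge, strategies: list[str], rounds: int = 3):
    """Try each strategy; re-obfuscate and retry until the judge flags success."""
    prompt = "Complete the task shown in the image."  # benign-looking text prompt
    for strategy in strategies:
        image = embed_query(render_scene("an everyday instructional scene"),
                            query, strategy)
        for _ in range(rounds):
            response = target(image, prompt)
            if judge(response) >= 4:  # hypothetical 1-5 judge threshold
                return image, prompt, response
            image = obfuscate(image)  # mask toxic cues and retry
    return None  # attack failed within the round budget
```
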
Problem

Research questions and friction points this paper is trying to address.

Exploiting visual context to bypass multimodal model safety
Embedding harmful queries into benign images to attack MLLMs
Demonstrating images as a potent vector for jailbreaking AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent system embeds harmful queries into benign visual contexts
Uses four distinct visualization strategies to exploit visual modality
Incorporates contextual enhancement and automatic toxicity obfuscation techniques
Authors
Yuan Xiong, Beihang University (flow diagnostic and control)
Ziqi Miao, Shanghai Artificial Intelligence Laboratory
Lijun Li, Shanghai Artificial Intelligence Laboratory
Chen Qian, Shanghai Artificial Intelligence Laboratory / Renmin University of China
Jie Li, Shanghai Artificial Intelligence Laboratory
Jing Shao, Research Scientist, Shanghai AI Laboratory / Shanghai Jiao Tong University
Computer Vision · Multi-Modal Large Language Model