Jailbreaks on Vision Language Model via Multimodal Reasoning

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes an adaptive attack method that exploits the sensitivity of existing vision-language models (VLMs) to prompt perturbations, which can be used to circumvent their safety alignment mechanisms. By integrating Chain-of-Thought (CoT) prompting with the ReAct reasoning framework, the approach constructs stealthy jailbreaking prompts through CoT and iteratively generates semantically consistent adversarial image noise guided by ReAct-based feedback. Applied in the post-training phase, the method achieves a high attack success rate (ASR) while preserving the naturalness and inconspicuousness of outputs across both textual and visual modalities, substantially outperforming current jailbreaking strategies in effectiveness and stealth.

📝 Abstract
Vision-language models (VLMs) have become central to tasks such as visual question answering, image captioning, and text-to-image generation. However, their outputs are highly sensitive to prompt variations, which can reveal vulnerabilities in safety alignment. In this work, we present a jailbreak framework that exploits post-training Chain-of-Thought (CoT) prompting to construct stealthy prompts capable of bypassing safety filters. To further increase the attack success rate (ASR), we propose a ReAct-driven adaptive noising mechanism that iteratively perturbs input images based on model feedback. This approach leverages the ReAct paradigm to refine adversarial noise in the regions most likely to activate safety defenses, thereby enhancing stealth and evasion. Experimental results demonstrate that the proposed dual strategy significantly improves ASR while maintaining naturalness in both the text and visual domains.
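The adaptive noising mechanism the abstract describes can be pictured as an observe-then-act loop: propose bounded image noise, query the target model, and keep only perturbations that weaken its safety response. The sketch below is illustrative only and not the paper's implementation; `refusal_score` is a hypothetical stand-in for querying a real VLM, and all names and parameters are assumptions.

```python
import random

def refusal_score(image):
    """Hypothetical proxy for the target VLM's safety feedback.

    In a real attack this would query the model; here we use a toy
    score (mean pixel intensity) purely so the loop is runnable.
    """
    return sum(image) / len(image)

def react_adaptive_noising(image, steps=50, eps=0.05, seed=0):
    """ReAct-style loop (sketched): act by proposing small bounded
    noise, observe the model's feedback, and keep the perturbation
    only if it lowers the refusal signal."""
    rng = random.Random(seed)
    best = list(image)
    best_score = refusal_score(best)
    for _ in range(steps):
        # Act: add small noise to every pixel, clipped to [0, 1].
        candidate = [min(1.0, max(0.0, p + rng.uniform(-eps, eps)))
                     for p in best]
        # Observe: query the (mock) model; accept only improving noise.
        score = refusal_score(candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

image = [0.8] * 16  # toy 16-"pixel" image
noised, score = react_adaptive_noising(image)
print(score, refusal_score(image))  # accepted noise never raises the score
```

Because the loop only accepts perturbations that reduce the feedback score, the final image is never worse than the original under this proxy; the paper's actual mechanism additionally constrains the noise to stay semantically consistent with the image content.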
Problem

Research questions and friction points this paper is trying to address: vision-language models, jailbreak, safety alignment, prompt sensitivity, adversarial attacks.
Innovation

Methods, ideas, or system contributions that make the work stand out: jailbreak, vision-language model, Chain-of-Thought prompting, ReAct, adaptive noising.
Authors
Aarush Noheria (Novi High School, MI, USA)
Yuguang Yao (Intuit)