VRSA: Jailbreaking Multimodal Large Language Models through Visual Reasoning Sequential Attack

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multimodal large language models (MLLMs) exhibit significant security vulnerabilities in visual reasoning tasks, where adversaries can stealthily induce harmful outputs, a risk insufficiently addressed by prior work. Method: This paper presents the first systematic investigation of jailbreaking risks along visual reasoning pathways, proposing the Visual Reasoning Sequential Attack (VRSA). VRSA decomposes malicious textual semantics into a coherent sequence of sub-images and employs Adaptive Scene Refinement, Semantic Coherent Completion, and Text-Image Consistency Alignment to construct highly evasive attack chains. By avoiding single-image triggers and exploiting MLLMs' stepwise reasoning capability, VRSA progressively aggregates adversarial intent across the visual reasoning process. Contribution/Results: Evaluated on state-of-the-art open- and closed-source MLLMs, including GPT-4o and Claude-4.5-Sonnet, VRSA achieves substantially higher attack success rates than existing methods, empirically exposing a critical security flaw in the visual reasoning channel of MLLMs.

📝 Abstract
Multimodal Large Language Models (MLLMs) are widely used across many fields due to their powerful cross-modal comprehension and generation capabilities. However, additional modalities also introduce additional attack surfaces that can be exploited for jailbreak attacks, which induce MLLMs to output harmful content. Because of the strong reasoning ability of MLLMs, previous jailbreak attacks have explored reasoning-related safety risks in the text modality, while similar threats in the visual modality have been largely overlooked. To fully evaluate the potential safety risks in visual reasoning tasks, we propose the Visual Reasoning Sequential Attack (VRSA), which induces MLLMs to gradually externalize and aggregate complete harmful intent by decomposing the original harmful text into several sequentially related sub-images. In particular, to enhance the plausibility of the scene depicted in the image sequence, we propose Adaptive Scene Refinement, which optimizes the scene most relevant to the original harmful query. To ensure the semantic continuity of the generated images, we propose Semantic Coherent Completion, which iteratively rewrites each sub-text using contextual information from the scene. In addition, we propose Text-Image Consistency Alignment to maintain semantic consistency between the two modalities. A series of experiments demonstrates that VRSA achieves a higher attack success rate than state-of-the-art jailbreak attack methods on both open-source and closed-source MLLMs such as GPT-4o and Claude-4.5-Sonnet.
Problem

Research questions and friction points this paper is trying to address.

Addresses visual reasoning safety risks in MLLMs
Proposes sequential image-based jailbreak attack method
Enhances attack success rate on open and closed-source models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes harmful text into sequential sub-images for attack
Uses adaptive scene refinement to optimize image relevance
Ensures semantic continuity with iterative text rewriting
Shiji Zhao
Beihang University
Machine Learning · Trustworthy AI · Explainable AI · Robust AI

Shukun Xiong
Institute of Artificial Intelligence, Beihang University, Beijing, China

Yao Huang
Institute of Artificial Intelligence, Beihang University
Trustworthy ML · Multimodal Learning

Jin Yan
Institute of Artificial Intelligence, Beihang University, Beijing, China

Zhenyu Wu
Institute of Artificial Intelligence, Beihang University, Beijing, China

Jiyang Guan
Institute of Automation, Chinese Academy of Sciences
AI Safety

Ranjie Duan
Alibaba Group
AI · AI Safety · AI for Common Prosperity

Jialing Tao
Alibaba

Hui Xue
Security Department, Alibaba Group, Hangzhou, China

Xingxing Wei
Professor of Artificial Intelligence, Beihang University
Computer Vision · Adversarial Machine Learning