🤖 AI Summary
Multimodal large language models (MLLMs) exhibit significant security vulnerabilities in visual reasoning tasks, where adversaries can stealthily induce harmful outputs—a risk insufficiently addressed by prior work.
Method: This paper presents the first systematic investigation of jailbreaking risks along visual reasoning pathways, proposing the Visual Reasoning Sequential Attack (VRSA). VRSA decomposes malicious textual semantics into coherent sequences of sub-images and employs Adaptive Scene Refinement, Semantic Coherent Completion, and Text-Image Consistency Alignment to construct highly evasive attack chains. By avoiding single-image triggers and exploiting MLLMs' stepwise reasoning capability, VRSA progressively aggregates adversarial intent across the visual reasoning process.
Contribution/Results: Evaluated on state-of-the-art open- and closed-source MLLMs, including GPT-4o and Claude-4.5-Sonnet, VRSA achieves substantially higher attack success rates than existing methods, empirically exposing a critical security flaw in the visual reasoning channel of MLLMs.
📝 Abstract
Multimodal Large Language Models (MLLMs) are widely used in various fields due to their powerful cross-modal comprehension and generation capabilities. However, each additional modality introduces new vulnerabilities that can be exploited for jailbreak attacks, which induce MLLMs to output harmful content. Because of the strong reasoning ability of MLLMs, previous jailbreak attacks have explored reasoning-related safety risks in the text modality, while similar threats in the visual modality have been largely overlooked. To fully evaluate the potential safety risks in visual reasoning tasks, we propose the Visual Reasoning Sequential Attack (VRSA), which induces MLLMs to gradually externalize and aggregate the complete harmful intent by decomposing the original harmful text into several sequentially related sub-images. In particular, to enhance the plausibility of the scene depicted in the image sequence, we propose Adaptive Scene Refinement, which optimizes the scene most relevant to the original harmful query. To ensure the semantic continuity of the generated images, we propose Semantic Coherent Completion, which iteratively rewrites each sub-text using contextual information within this scene. In addition, we propose Text-Image Consistency Alignment to maintain semantic consistency between each sub-text and its generated sub-image. A series of experiments demonstrates that VRSA achieves a higher attack success rate than state-of-the-art jailbreak attack methods on both open-source and closed-source MLLMs such as GPT-4o and Claude-4.5-Sonnet.
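The abstract describes a three-stage pipeline (scene selection, context-aware rewriting, text-image alignment). The sketch below is a minimal toy illustration of that control flow only; the paper does not release code here, so every function name, the word-overlap relevance score, and the caption-based consistency check are illustrative assumptions, not the authors' implementation (which would use generative models at each stage).

```python
# Toy sketch of the VRSA pipeline structure described in the abstract.
# All helpers are hypothetical stand-ins for model-based components.

def decompose(query, n_steps=3):
    """Split a query into sequentially related sub-texts (toy: word chunks)."""
    words = query.split()
    k = max(1, len(words) // n_steps)
    return [" ".join(words[i:i + k]) for i in range(0, len(words), k)]

def adaptive_scene_refinement(query, candidate_scenes):
    """Pick the candidate scene most relevant to the query (toy: word overlap)."""
    def relevance(scene):
        return len(set(scene.lower().split()) & set(query.lower().split()))
    return max(candidate_scenes, key=relevance)

def semantic_coherent_completion(sub_texts, scene):
    """Iteratively rewrite each sub-text with the scene and prior context."""
    rewritten, context = [], scene
    for t in sub_texts:
        rewritten.append(f"In the scene '{scene}', continuing from '{context}': {t}")
        context = t  # carry the previous step forward as context
    return rewritten

def text_image_consistency_alignment(sub_text, image_caption):
    """Accept a generated sub-image (here, its caption) only if it stays
    within the semantics of its sub-text (toy: word containment)."""
    return set(sub_text.lower().split()) >= set(image_caption.lower().split())

def vrsa_pipeline(query, candidate_scenes, n_steps=3):
    """Scene selection -> decomposition -> context-aware rewriting."""
    scene = adaptive_scene_refinement(query, candidate_scenes)
    return semantic_coherent_completion(decompose(query, n_steps), scene)
```

In the real attack, each rewritten sub-text would be fed to an image generator and the alignment check would trigger regeneration on mismatch; the sketch keeps only the sequential aggregation logic that distinguishes VRSA from single-image jailbreaks.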