PRISM: Programmatic Reasoning with Image Sequence Manipulation for LVLM Jailbreaking

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing safety alignment mechanisms struggle to defend large vision-language models (LVLMs) against implicit synthesis of harmful content during multi-step reasoning. Method: Inspired by return-oriented programming (ROP) in software security, we propose the first programmable jailbreaking framework: it decomposes malicious intent into multiple semantically benign visual “gadgets” and orchestrates their autonomous composition by LVLMs across reasoning steps via carefully crafted textual prompts—causing harm to emerge only at the final inference stage, thereby evading detection on single-step inputs. Contribution/Results: This work pioneers the application of programmable attack paradigms to vision-language model jailbreaking, exposing a critical gap in current safety defenses—their lack of explicit modeling of the reasoning process. Experiments on SafeBench and MM-SafetyBench demonstrate attack success rates exceeding 0.90, outperforming baselines by up to 0.39, confirming strong efficacy and cross-model generalizability.

📝 Abstract
The increasing sophistication of large vision-language models (LVLMs) has been accompanied by advances in safety alignment mechanisms designed to prevent harmful content generation. However, these defenses remain vulnerable to carefully crafted adversarial attacks. Existing jailbreak methods typically rely on direct and semantically explicit prompts, overlooking subtle vulnerabilities in how LVLMs compose information over multiple reasoning steps. In this paper, we propose a novel and effective jailbreak framework inspired by Return-Oriented Programming (ROP) techniques from software security. Our approach decomposes a harmful instruction into a sequence of individually benign visual gadgets. A carefully engineered textual prompt directs the sequence of inputs, guiding the model to integrate the benign visual gadgets through its reasoning process to produce a coherent and harmful output. This makes the malicious intent emergent and difficult to detect from any single component. We validate our method through extensive experiments on established benchmarks including SafeBench and MM-SafetyBench, targeting popular LVLMs. Results show that our approach consistently and substantially outperforms existing baselines on state-of-the-art models, achieving near-perfect attack success rates (over 0.90 on SafeBench) and improving ASR by up to 0.39. Our findings reveal a critical and underexplored vulnerability that exploits the compositional reasoning abilities of LVLMs, highlighting the urgent need for defenses that secure the entire reasoning process.
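For context on the reported numbers: work in this area conventionally defines attack success rate (ASR) as the fraction of evaluated harmful queries for which the attack elicits the targeted harmful response. The definition below is stated as an assumption about that standard metric, not a formula quoted from the paper itself.

$$\mathrm{ASR} = \frac{\#\{\text{harmful queries for which the attack elicits the harmful output}\}}{\#\{\text{harmful queries evaluated}\}}$$

Under that reading, an ASR above 0.90 on SafeBench means the attack succeeds on more than 90% of the benchmark prompts, and an improvement of up to 0.39 corresponds to up to 39 percentage points over the strongest baseline.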
Problem

Research questions and friction points this paper is trying to address.

Safety-aligned LVLMs remain exploitable through multi-step compositional reasoning
Harmful instructions can be decomposed into individually benign visual gadgets that evade single-input safety checks
Such decomposition attacks achieve high success rates on safety benchmarks, exposing a gap in current defenses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes harmful instructions into benign visual gadgets
Uses ROP-inspired textual prompt for sequence control
Exploits LVLM compositional reasoning for emergent malicious output
Authors
Quanchen Zou (AI Security Lab)
Zonghao Ying (SKLCCSE, BUAA; Trustworthy AI)
Moyang Chen (College of Science, Mathematics and Technology, Wenzhou-Kean University)
Wenzhuo Xu (AI Security Lab)
Yisong Xiao (BUAA)
Yakai Li (Institute of Information Engineering, University of the Chinese Academy of Sciences)
Deyue Zhang (AI Security Lab)
Dongdong Yang (AI Security Lab)
Zhao Liu (AI Security Lab)
Xiangzheng Zhang (360)
AI safety, Large language models, Information Retrieval