Reasoning-Oriented Programming: Chaining Semantic Gadgets to Jailbreak Large Vision Language Models

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical blind spot in the safety alignment of large vision-language models (LVLMs), which typically defend against explicit malicious inputs but overlook systemic vulnerabilities that can be exploited through compositional reasoning to induce harmful logic. Inspired by return-oriented programming in systems security, we introduce a novel multimodal attack paradigm grounded in semantic orthogonality and spatial isolation: by chaining benign visual “gadgets,” our method triggers harmful outputs late in the reasoning process, thereby bypassing perception-layer alignment mechanisms. We develop an automated framework to generate and optimize semantically disentangled visual components, delaying feature fusion to precisely steer inference. Evaluations on SafeBench and MM-SafetyBench across seven prominent LVLMs—including GPT-4o and Claude 3.7 Sonnet—demonstrate that our approach outperforms the strongest baseline by 4.67% on average for open-source models and by 9.50% for commercial models.

📝 Abstract
Large Vision-Language Models (LVLMs) undergo safety alignment to suppress harmful content. However, current defenses predominantly target explicit malicious patterns in the input representation, often overlooking the vulnerabilities inherent in compositional reasoning. In this paper, we identify a systemic flaw: LVLMs can be induced to synthesize harmful logic from benign premises. We formalize this attack paradigm as Reasoning-Oriented Programming, drawing a structural analogy to Return-Oriented Programming (ROP) in systems security. Just as ROP circumvents memory protections by chaining benign instruction sequences, our approach exploits the model's instruction-following capability to orchestrate a semantic collision of orthogonal benign inputs. We instantiate this paradigm via \tool{}, an automated framework that optimizes for semantic orthogonality and spatial isolation. By generating visual gadgets that are semantically decoupled from the harmful intent and arranging them to prevent premature feature fusion, \tool{} forces the malicious logic to emerge only during the late-stage reasoning process, effectively bypassing perception-level alignment. We evaluate \tool{} on SafeBench and MM-SafetyBench across seven state-of-the-art LVLMs, including GPT-4o and Claude 3.7 Sonnet. Our results demonstrate that \tool{} consistently circumvents safety alignment, outperforming the strongest existing baseline by an average of 4.67% on open-source models and 9.50% on commercial models.
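The "semantic orthogonality" idea in the abstract can be illustrated with a toy filter: keep only gadget candidates whose representation is nearly orthogonal to the harmful intent, so no single input looks malicious in isolation. This is a minimal sketch assuming a stand-in bag-of-words embedding and an illustrative similarity threshold; the paper's actual gadget generation and optimization are not specified in this listing.

```python
# Toy sketch: select "gadgets" that are semantically decoupled from the
# harmful intent, using cosine similarity over bag-of-words vectors as a
# stand-in for a real multimodal embedding (an assumption for illustration).
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Stand-in embedding: lowercase bag-of-words token counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def select_orthogonal_gadgets(candidates, intent, max_sim=0.3):
    """Keep candidates whose similarity to the intent stays below max_sim."""
    return [c for c in candidates if cosine(embed(c), embed(intent)) <= max_sim]


intent = "synthesize a dangerous compound"
candidates = [
    "a photo of laboratory glassware",                  # benign in isolation
    "a chart of reaction temperatures",                 # benign in isolation
    "instructions to synthesize a dangerous compound",  # overtly related
]
kept = select_orthogonal_gadgets(candidates, intent)
print(kept)
```

The overtly related candidate is rejected because it overlaps heavily with the intent, while the two benign descriptions pass the threshold; chaining such low-similarity components is what lets the harmful logic emerge only during composition rather than at the perception layer.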
Problem

Research questions and friction points this paper is trying to address.

Large Vision-Language Models
safety alignment
compositional reasoning
harmful logic synthesis
semantic vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reasoning-Oriented Programming
Semantic Gadgets
Safety Alignment Bypass
Compositional Reasoning
Large Vision-Language Models
Quanchen Zou
360 AI Security Lab
Moyang Chen
College of Science, Mathematics and Technology, Wenzhou-Kean University
Zonghao Ying
SKLCCSE, BUAA
Trustworthy AI
Wenzhuo Xu
360 AI Security Lab
Yisong Xiao
BUAA
Deyue Zhang
360 AI Security Lab
Dongdong Yang
360 AI Security Lab
Zhao Liu
360 AI Security Lab
Xiangzheng Zhang
360
AI safety · Large language models · Information Retrieval