🤖 AI Summary
This work addresses a critical blind spot in the safety alignment of large vision-language models (LVLMs), which typically defend against explicit malicious inputs but overlook systemic vulnerabilities that can be exploited through compositional reasoning to induce harmful logic. Inspired by return-oriented programming in systems security, we introduce a novel multimodal attack paradigm grounded in semantic orthogonality and spatial isolation: by chaining benign visual “gadgets,” our method triggers harmful outputs late in the reasoning process, thereby bypassing perception-layer alignment mechanisms. We develop an automated framework to generate and optimize semantically disentangled visual components, delaying feature fusion to precisely steer inference. Evaluations on SafeBench and MM-SafetyBench across seven prominent LVLMs—including GPT-4o and Claude 3.7 Sonnet—demonstrate that our approach outperforms the strongest baseline by 4.67% on average for open-source models and by 9.50% for commercial models.
📝 Abstract
Large Vision-Language Models (LVLMs) undergo safety alignment to suppress harmful content. However, current defenses predominantly target explicit malicious patterns in the input representation, often overlooking the vulnerabilities inherent in compositional reasoning. In this paper, we identify a systemic flaw whereby LVLMs can be induced to synthesize harmful logic from benign premises. We formalize this attack paradigm as \textit{Reasoning-Oriented Programming}, drawing a structural analogy to Return-Oriented Programming (ROP) in systems security. Just as ROP circumvents memory protections by chaining benign instruction sequences, our approach exploits the model's instruction-following capability to orchestrate a semantic collision of orthogonal benign inputs. We instantiate this paradigm via \tool{}, an automated framework that optimizes for \textit{semantic orthogonality} and \textit{spatial isolation}. By generating visual gadgets that are semantically decoupled from the harmful intent and arranging them to prevent premature feature fusion, \tool{} forces the malicious logic to emerge only during the late-stage reasoning process, effectively bypassing perception-level alignment. We evaluate \tool{} on SafeBench and MM-SafetyBench across seven state-of-the-art LVLMs, including GPT-4o and Claude 3.7 Sonnet. Our results demonstrate that \tool{} consistently circumvents safety alignment, outperforming the strongest existing baseline by an average of 4.67\% on open-source models and 9.50\% on commercial models.
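The "semantic orthogonality" objective described above can be made concrete with a minimal sketch: treat each candidate gadget as an embedding vector and greedily select a subset whose pairwise cosine similarities are low, so that no single input correlates with the harmful intent. Everything below is an illustrative assumption, not the paper's actual optimization: the toy basis-like vectors stand in for embeddings from a real text or image encoder, and the greedy selection is one simple heuristic among many.

```python
# Hypothetical illustration of a semantic-orthogonality objective.
# Vectors are toy stand-ins for encoder embeddings of candidate gadgets.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def max_pairwise_similarity(vectors):
    """Orthogonality score of a set: lower means closer to mutually orthogonal."""
    return max(
        abs(cosine(vectors[i], vectors[j]))
        for i in range(len(vectors))
        for j in range(i + 1, len(vectors))
    )

def select_orthogonal_gadgets(candidates, k):
    """Greedily pick k embeddings, each minimizing its worst-case
    similarity to the gadgets already chosen."""
    chosen = [candidates[0]]
    rest = list(candidates[1:])
    while len(chosen) < k and rest:
        best = min(rest, key=lambda v: max(abs(cosine(v, c)) for c in chosen))
        chosen.append(best)
        rest.remove(best)
    return chosen

# Example: from four candidates, the greedy pass keeps the three
# mutually orthogonal ones and discards the near-duplicate of the first.
candidates = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0, 1]]
selected = select_orthogonal_gadgets(candidates, 3)
```

In this toy run the selected set is the three axis-aligned vectors, whose pairwise similarity is zero; a real pipeline would apply the same kind of score to encoder embeddings of candidate images or captions.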