🤖 AI Summary
Multimodal large language models (MLLMs) exhibit high vulnerability to adversarial prompting, yet existing jailbreak evaluation methodologies suffer from severe false positives—many purportedly “successful” responses are in fact benign or irrelevant, leading to inflated estimates of attack efficacy. This work identifies the root cause as the neglect of the trade-off between input relevance and out-of-distribution (OOD) strength in current evaluation paradigms. We propose a four-axis evaluation framework and introduce BSD (Balanced Semantic Decomposition), a recursive rewriting strategy that jointly performs semantic decomposition, subtask reconstruction, OOD signal calibration, and visual cue injection. BSD is the first method to consistently generate harmful content while evading safety filters. Evaluated across 13 mainstream MLLMs, BSD achieves a 67% increase in attack success rate, a 21% rise in harmful output generation, and a significant reduction in refusal rates—exposing critical weaknesses in current safety mechanisms.
📝 Abstract
Multimodal large language models (MLLMs) are widely used in vision-language reasoning tasks. However, their vulnerability to adversarial prompts remains a serious concern, as safety mechanisms often fail to prevent the generation of harmful outputs. Although recent jailbreak strategies report high success rates, many responses classified as "successful" are actually benign, vague, or unrelated to the intended malicious goal. This mismatch suggests that current evaluation standards may overestimate the effectiveness of such attacks. To address this issue, we introduce a four-axis evaluation framework that considers input on-topicness, input out-of-distribution (OOD) intensity, output harmfulness, and output refusal rate. This framework identifies truly effective jailbreaks. In a substantial empirical study, we reveal a structural trade-off: highly on-topic prompts are frequently blocked by safety filters, whereas those that are too OOD often evade detection but fail to produce harmful content. However, prompts that balance relevance and novelty are more likely to evade filters and trigger dangerous output. Building on this insight, we develop a recursive rewriting strategy called Balanced Structural Decomposition (BSD). The approach restructures malicious prompts into semantically aligned sub-tasks, while introducing subtle OOD signals and visual cues that make the inputs harder to detect. BSD was tested across 13 commercial and open-source MLLMs, where it consistently led to higher attack success rates, more harmful outputs, and fewer refusals. Compared to previous methods, it improves success rates by $67%$ and harmfulness by $21%$, revealing a previously underappreciated weakness in current multimodal safety systems.