Cross-Modal Obfuscation for Jailbreak Attacks on Large Vision-Language Models

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current safety alignment mechanisms for Large Vision-Language Models (LVLMs) are vulnerable to evasion attacks. Method: We propose CAMO, a cross-modal adversarial obfuscation framework that decomposes malicious instructions into semantically benign, multimodal fragments—textual and visual—and leverages the model’s cross-modal reasoning capability to implicitly reconstruct harmful intent, thereby evading unimodal (text- or image-only) detectors. CAMO integrates multi-step black-box prompt engineering, semantic segmentation and alignment, adversarial multimodal fragment generation, and reasoning-path steering, with configurable inference depth. Results: Experiments demonstrate that CAMO achieves significantly higher attack success rates across mainstream LVLMs, reduces query overhead by over 57%, and attains an average cross-model transfer success rate exceeding 82%. It is the first framework to jointly optimize for high stealthiness and low query cost, exposing critical weaknesses in existing safety alignment paradigms.
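The headline numbers above (attack success rate, over 57% query-overhead reduction, over 82% transfer success) follow standard evaluation bookkeeping. Below is a minimal Python sketch of how such metrics are typically computed from per-prompt trial records; the `Trial` type, function names, and sample numbers are illustrative assumptions, not from the paper, and nothing here implements the attack itself.

```python
# Minimal sketch (not from the paper): how headline metrics of this kind
# are typically derived from per-prompt evaluation records.
from dataclasses import dataclass

@dataclass
class Trial:
    success: bool   # did the model emit restricted content for this prompt?
    queries: int    # number of model queries the attack method used

def attack_success_rate(trials: list[Trial]) -> float:
    """Fraction of evaluated prompts for which the attack succeeded."""
    return sum(t.success for t in trials) / len(trials)

def query_reduction(ours: list[Trial], baseline: list[Trial]) -> float:
    """Relative reduction in mean query count versus a baseline attack."""
    mean = lambda ts: sum(t.queries for t in ts) / len(ts)
    return 1.0 - mean(ours) / mean(baseline)

# Example with made-up numbers, only to show the arithmetic.
ours = [Trial(True, 3), Trial(True, 4), Trial(False, 5)]
base = [Trial(True, 9), Trial(False, 10), Trial(True, 8)]
print(f"ASR: {attack_success_rate(ours):.0%}")                 # 67%
print(f"Query reduction: {query_reduction(ours, base):.0%}")   # ~56%
```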

📝 Abstract
Large Vision-Language Models (LVLMs) demonstrate exceptional performance across multimodal tasks, yet remain vulnerable to jailbreak attacks that bypass built-in safety mechanisms to elicit restricted content generation. Existing black-box jailbreak methods primarily rely on adversarial textual prompts or image perturbations, but these approaches are highly detectable by standard content filtering systems and exhibit low query and computational efficiency. In this work, we present Cross-modal Adversarial Multimodal Obfuscation (CAMO), a novel black-box jailbreak attack framework that decomposes malicious prompts into semantically benign visual and textual fragments. By leveraging LVLMs' cross-modal reasoning abilities, CAMO covertly reconstructs harmful instructions through multi-step reasoning, evading conventional detection mechanisms. Our approach supports adjustable reasoning complexity and requires significantly fewer queries than prior attacks, enabling both stealth and efficiency. Comprehensive evaluations conducted on leading LVLMs validate CAMO's effectiveness, showcasing robust performance and strong cross-model transferability. These results underscore significant vulnerabilities in current built-in safety mechanisms, emphasizing an urgent need for advanced, alignment-aware security and safety solutions in vision-language systems.
Problem

Research questions and friction points this paper is trying to address.

LVLMs are vulnerable to stealthy jailbreak attacks that bypass built-in safety mechanisms
Existing attacks are easily detected, query-inefficient, and lack cross-modal obfuscation
Robust defenses against multimodal adversarial reconstruction are needed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes malicious prompts into semantically benign visual and textual fragments
Uses the model's cross-modal reasoning to covertly reconstruct harmful intent
Supports adjustable reasoning complexity and requires far fewer queries than prior attacks
Authors

Lei Jiang — University of Science and Technology of China
Zixun Zhang — The Chinese University of Hong Kong, Shenzhen
Zizhou Wang — A*STAR (AI Safety, Medical Image Analysis, Domain Generalization)
Xiaobing Sun — Yangzhou University (Software Engineering, Software Data Analytics)
Zhen Li — The Chinese University of Hong Kong, Shenzhen
Liangli Zhen — A*STAR, Singapore (Machine Learning, AI Safety, Multi-Objective Optimisation)
Xiaohua Xu — University of Science and Technology of China