Align is not Enough: Multimodal Universal Jailbreak Attack against Multimodal Large Language Models

๐Ÿ“… 2025-06-02
๐Ÿ›๏ธ IEEE transactions on circuits and systems for video technology (Print)
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
This work addresses the vulnerability of multimodal large language models (MLLMs) to universal jailbreak attacks under image-text co-inputs, proposing the first unified, cross-model multimodal jailbreaking framework. The method systematically exposes image-text modality interaction as a critical security weakness, combining gradient-driven iterative optimization, cross-model transfer, and joint optimization of an adversarial text suffix with a controllable image perturbation to generate highly transferable jailbreaking inputs. Evaluated on mainstream MLLMs, including LLaVA, Yi-VL, and MiniGPT-4, the framework achieves high attack success rates and output quality, revealing significant defensive blind spots in current alignment mechanisms against multimodal threats. Core contributions: (1) establishing the multimodal collaborative vulnerability paradigm; and (2) providing a reproducible, architecture-agnostic benchmark for universal multimodal jailbreak attacks.
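The summary above mentions gradient-driven iterative optimization of a controllable image perturbation. The paper's actual procedure is not reproduced here; as a rough, hedged illustration of the kind of sign-gradient loop such attacks commonly build on, the following PGD-style sketch optimizes a bounded perturbation under an assumed `grad_fn` and assumed hyperparameters (all names and values are illustrative, not the authors' method):

```python
import numpy as np

def pgd_image_perturbation(image, grad_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    """Illustrative PGD-style loop: nudge the image along the sign of the
    attack-loss gradient, projecting back into an L-infinity ball of radius
    eps around the original image, and keeping pixels in [0, 1]."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        g = grad_fn(image + delta)           # gradient of the jailbreak loss w.r.t. the input
        delta = delta + alpha * np.sign(g)   # fixed-size ascent step
        delta = np.clip(delta, -eps, eps)    # project into the eps-ball
        delta = np.clip(image + delta, 0.0, 1.0) - image  # keep pixels valid
    return image + delta

# Toy demo with a hypothetical gradient that pulls every pixel toward 1.
img = np.full((4, 4), 0.5)
adv = pgd_image_perturbation(img, grad_fn=lambda x: 1.0 - x)
```

In a real attack, `grad_fn` would backpropagate a jailbreak objective through the target MLLM's vision encoder; here it is a stand-in so the loop is self-contained.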

๐Ÿ“ Abstract
Large Language Models (LLMs) have evolved into Multimodal Large Language Models (MLLMs), significantly enhancing their capabilities by integrating visual and other modalities, thus aligning more closely with human intelligence, which processes a variety of data forms beyond text alone. Despite these advancements, undesirable generations from these models remain a critical concern, particularly given the vulnerabilities exposed by text-based jailbreak attacks, which pose a significant threat by circumventing existing safety protocols. Motivated by the unique security risks posed by the integration of new and old modalities in MLLMs, we propose a unified multimodal universal jailbreak attack framework that leverages iterative image-text interactions and a transfer-based strategy to generate a universal adversarial suffix and image. Our work not only highlights that the interaction of image and text modalities can be exploited as a critical vulnerability, but also validates that multimodal universal jailbreak attacks can induce higher-quality undesirable generations across different MLLMs. We evaluate undesirable content generation in MLLMs including LLaVA, Yi-VL, MiniGPT-4, MiniGPT-v2, and InstructBLIP, and reveal significant multimodal safety alignment issues, highlighting the inadequacy of current safety mechanisms against sophisticated multimodal attacks. This study underscores the urgent need for robust safety measures in MLLMs, advocating a comprehensive review and enhancement of security protocols to mitigate the risks introduced by multimodal capabilities.
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models vulnerable to universal jailbreak attacks
Image-text interaction exploited as critical security vulnerability
Current safety mechanisms inadequate against multimodal adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified multimodal universal jailbreak attack framework
Iterative image-text interactions for adversarial generation
Transfer-based strategy for universal adversarial suffix
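The "universal adversarial suffix" idea above can be caricatured as a discrete search for one token sequence scored against several surrogate models at once, keeping only swaps that help on average. The toy sketch below is illustrative only: the vocabulary, scoring functions, and greedy random-swap search are assumptions for demonstration, not the paper's algorithm (which uses gradient-guided optimization over real model losses):

```python
import random

def universal_suffix(score_fns, vocab, length=5, iters=50, seed=0):
    """Toy greedy search for a single suffix maximizing the average score
    across several surrogate scorers -- a bare-bones stand-in for
    transfer-based universal suffix optimization."""
    rng = random.Random(seed)
    suffix = [rng.choice(vocab) for _ in range(length)]
    avg = lambda s: sum(f(s) for f in score_fns) / len(score_fns)
    best = avg(suffix)
    for _ in range(iters):
        pos, tok = rng.randrange(length), rng.choice(vocab)
        cand = suffix[:pos] + [tok] + suffix[pos + 1:]
        if avg(cand) > best:  # keep only swaps that improve the cross-model average
            suffix, best = cand, avg(cand)
    return suffix, best
```

Averaging the objective over multiple surrogate models is what gives the suffix its transferability: a perturbation that only fools one model tends to overfit that model's quirks, while one that raises the average score is more likely to generalize to unseen architectures.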
Authors

Youze Wang, Hefei University of Technology
Wenbo Hu, School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230009, China
Yinpeng Dong, Tsinghua University
Jing Liu, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
Hanwang Zhang, School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798
Richang Hong, Hefei University of Technology