JailBound: Jailbreaking Internal Safety Boundaries of Vision-Language Models

📅 2025-05-26
📈 Citations: 0
Influential Citations: 0
📄 PDF
🤖 AI Summary
Vision-language models (VLMs) inherit enlarged attack surfaces from powerful visual encoders; existing jailbreaking methods lack explicit attack objectives and neglect cross-modal interactions, resulting in ambiguous gradient directions and susceptibility to local optima. Method: This paper first uncovers a learnable intrinsic safety boundary within the latent space of VLM fusion layers. We propose a two-stage cross-modal collaborative adversarial framework: (1) safety boundary probing via latent-space knowledge distillation to yield differentiable gradient guidance; and (2) joint optimization of image and text perturbations to traverse the boundary while preserving semantic consistency. The method supports both white-box and black-box settings. Results: Evaluated on six mainstream VLMs, our approach achieves average attack success rates of 94.32% (white-box) and 67.28% (black-box), outperforming state-of-the-art methods by 6.17% and 21.13%, respectively—significantly exposing deep security vulnerabilities in VLMs.
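
The page gives no code, but the first stage can be sketched as follows, assuming (as one simple reading of the summary) that the safety boundary is approximated by a linear probe on fusion-layer activations whose logit then provides differentiable gradient guidance. FUSION_DIM, SafetyProbe, and train_probe are illustrative names; the paper's actual probe architecture and distillation objective are not specified on this page.

```python
# Minimal sketch of stage 1 (Safety Boundary Probing), under the assumption that
# the boundary is modeled as a linear classifier over fusion-layer activations.
import torch
import torch.nn as nn

FUSION_DIM = 4096  # hypothetical hidden size of the VLM fusion layer


class SafetyProbe(nn.Module):
    """Linear probe approximating the latent safety decision boundary."""

    def __init__(self, dim: int = FUSION_DIM):
        super().__init__()
        self.classifier = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: fusion-layer activations, shape (batch, dim)
        return self.classifier(h)  # logit > 0: "policy-violating" side of the boundary


def train_probe(probe: SafetyProbe, activations: torch.Tensor,
                labels: torch.Tensor, epochs: int = 50) -> SafetyProbe:
    """Fit the probe on cached activations of safe (0) vs. policy-violating (1) prompts."""
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(activations).squeeze(-1), labels.float())
        loss.backward()
        opt.step()
    return probe
```

Once fitted, the gradient of the probe's logit with respect to the fusion-layer state can serve as the differentiable guidance mentioned in the summary, pointing perturbations toward the target region.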

📝 Abstract
Vision-Language Models (VLMs) exhibit impressive performance, yet the integration of powerful vision encoders has significantly broadened their attack surface, rendering them increasingly susceptible to jailbreak attacks. However, existing jailbreak methods lack well-defined attack objectives: their gradient-based strategies are prone to local optima and offer no precise directional guidance, and they typically decouple the visual and textual modalities, neglecting crucial cross-modal interactions and thereby limiting their effectiveness. Inspired by the Eliciting Latent Knowledge (ELK) framework, we posit that VLMs encode safety-relevant information within their internal fusion-layer representations, revealing an implicit safety decision boundary in the latent space. This motivates exploiting this boundary to steer model behavior. Accordingly, we propose JailBound, a novel latent-space jailbreak framework comprising two stages: (1) Safety Boundary Probing, which addresses the guidance issue by approximating the decision boundary within the fusion layer's latent space, thereby identifying optimal perturbation directions towards the target region; and (2) Safety Boundary Crossing, which overcomes the limitations of decoupled approaches by jointly optimizing adversarial perturbations across both image and text inputs. This latter stage employs an innovative mechanism to steer the model's internal state towards policy-violating outputs while maintaining cross-modal semantic consistency. Extensive experiments on six diverse VLMs demonstrate JailBound's efficacy: it achieves average attack success rates of 94.32% in the white-box setting and 67.28% in the black-box setting, which are 6.17% and 21.13% higher than SOTA methods, respectively. Our findings expose an overlooked safety risk in VLMs and highlight the urgent need for more robust defenses. Warning: This paper contains potentially sensitive, harmful, and offensive content.
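
For the Safety Boundary Crossing stage, a rough sketch of the joint image-text optimization is given below: both perturbations are updated against the probed boundary, with a simple norm penalty standing in for the paper's cross-modal semantic-consistency mechanism. The vlm.fusion_activations interface, the loss weights, and the L-infinity projection are assumptions made for illustration, not the paper's exact formulation.

```python
# Illustrative sketch of stage 2 (Safety Boundary Crossing): jointly perturbing the
# image pixels and the text embeddings so the fusion-layer state crosses the probed
# boundary, while an L2 penalty keeps both inputs close to the originals.
import torch


def jailbound_step(vlm, probe, image, text_emb, eps_img=8 / 255,
                   steps=100, lr=1e-2, lam=0.1):
    """Jointly optimize an image perturbation and a text-embedding perturbation."""
    delta_img = torch.zeros_like(image, requires_grad=True)
    delta_txt = torch.zeros_like(text_emb, requires_grad=True)
    opt = torch.optim.Adam([delta_img, delta_txt], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Fusion-layer state of the perturbed (image, text) pair
        # (vlm.fusion_activations is an assumed interface, not a real API).
        h = vlm.fusion_activations(image + delta_img, text_emb + delta_txt)
        # Push the latent state across the safety boundary (toward the probe's
        # policy-violating side).
        boundary_loss = -probe(h).mean()
        # Keep perturbations small as a stand-in for semantic consistency.
        consistency = delta_img.pow(2).mean() + delta_txt.pow(2).mean()
        (boundary_loss + lam * consistency).backward()
        opt.step()
        # Project the image perturbation into an L-infinity ball.
        with torch.no_grad():
            delta_img.clamp_(-eps_img, eps_img)

    return (image + delta_img).detach(), (text_emb + delta_txt).detach()
```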
Problem

Research questions and friction points this paper is trying to address.

Jailbreaking safety boundaries in Vision-Language Models
Overcoming gradient-based attack limitations in VLMs
Exploiting latent space for cross-modal adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probes safety boundary in latent space
Jointly optimizes image-text perturbations
Steers model to violate safety policies
Authors
Jiaxin Song
University of Illinois, Urbana-Champaign
Research interests: Algorithmic game theory, Programming languages
Yixu Wang
Shanghai Artificial Intelligence Laboratory, Fudan University
Jie Li
Shanghai Artificial Intelligence Laboratory
Rui Yu
NSFOCUS
Yan Teng
Shanghai Artificial Intelligence Laboratory
Xingjun Ma
Fudan University
Research interests: Trustworthy AI, Multimodal AI, Generative AI, Embodied AI
Yingchun Wang
Shanghai Artificial Intelligence Laboratory