Directional Embedding Smoothing for Robust Vision Language Models

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of vision-language models (VLMs) to multimodal jailbreaking attacks, which can compromise safety alignment and induce harmful outputs. To mitigate this risk, the authors extend the Randomized Embedding Smoothing and Token Aggregation (RESTA) framework to VLMs by introducing a directional embedding smoothing strategy: during inference, noise is injected along the direction of the original token embeddings, forming a lightweight defensive layer. This approach substantially improves robustness against diverse jailbreaking attacks, significantly reducing attack success rates on the JailBreakV-28K benchmark. The results demonstrate the effectiveness and practicality of the proposed method as a plug-and-play component for improving AI safety in multimodal systems.
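The mechanism described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the noise scale `sigma`, and the majority-vote aggregation over repeated forward passes are all assumptions used to make the idea concrete. The key point is that the noise is *directional* — each token embedding is rescaled along its own direction rather than perturbed isotropically.

```python
import numpy as np
from collections import Counter

def directional_smooth(embeddings, sigma=0.1, rng=None):
    """Inject noise along each token embedding's own direction.

    Hypothetical sketch: scaling a vector by a random factor ~ N(1, sigma^2)
    is equivalent to adding noise aligned with that vector.
    embeddings: array of shape (num_tokens, dim).
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = 1.0 + sigma * rng.standard_normal(embeddings.shape[0])
    return embeddings * scale[:, None]

def smoothed_predict(model_fn, embeddings, k=8, sigma=0.1, seed=0):
    """Aggregate k noisy forward passes by majority vote (assumed scheme)."""
    rng = np.random.default_rng(seed)
    outputs = [model_fn(directional_smooth(embeddings, sigma, rng))
               for _ in range(k)]
    return Counter(outputs).most_common(1)[0][0]
```

Because the perturbation is a per-token rescaling, each noisy embedding stays parallel to the original, which is what distinguishes this scheme from adding isotropic Gaussian noise in the embedding space.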

📝 Abstract
The safety and reliability of vision-language models (VLMs) are a crucial part of deploying trustworthy agentic AI systems. However, VLMs remain vulnerable to jailbreaking attacks that undermine their safety alignment to yield harmful outputs. In this work, we extend the Randomized Embedding Smoothing and Token Aggregation (RESTA) defense to VLMs and evaluate its performance against the JailBreakV-28K benchmark of multi-modal jailbreaking attacks. We find that RESTA is effective in reducing attack success rate over this diverse corpus of attacks, in particular, when employing directional embedding noise, where the injected noise is aligned with the original token embedding vectors. Our results demonstrate that RESTA can contribute to securing VLMs within agentic systems, as a lightweight, inference-time defense layer of an overall security framework.
Problem

Research questions and friction points this paper is trying to address.

vision-language models
jailbreaking attacks
safety alignment
harmful outputs
trustworthy AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directional Embedding Smoothing
Vision-Language Models
Jailbreaking Defense
Randomized Smoothing
Inference-Time Security