Risk Awareness Injection: Calibrating Vision-Language Models for Safety without Compromising Utility

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of vision-language models (VLMs) to multimodal jailbreak attacks, a critical security concern that existing defenses often mitigate at the cost of model utility or through computationally expensive retraining. To overcome these limitations, the authors propose a lightweight, training-free safety calibration framework. By constructing an unsafe prototype subspace from language embeddings, the method selectively modulates high-risk visual tokens to activate safety-critical signals in the cross-modal feature space—without altering model parameters—thereby restoring the VLM’s LLM-like risk awareness. Experimental results demonstrate that this approach substantially reduces the success rate of diverse jailbreak attacks while preserving competitive performance across a range of downstream tasks.

📝 Abstract
Vision language models (VLMs) extend the reasoning capabilities of large language models (LLMs) to cross-modal settings, yet remain highly vulnerable to multimodal jailbreak attacks. Existing defenses predominantly rely on safety fine-tuning or aggressive token manipulations, incurring substantial training costs or significantly degrading utility. Recent research shows that LLMs inherently recognize unsafe content in text, and the incorporation of visual inputs in VLMs frequently dilutes risk-related signals. Motivated by this, we propose Risk Awareness Injection (RAI), a lightweight and training-free framework for safety calibration that restores LLM-like risk recognition by amplifying unsafe signals in VLMs. Specifically, RAI constructs an Unsafe Prototype Subspace from language embeddings and performs targeted modulation on selected high-risk visual tokens, explicitly activating safety-critical signals within the cross-modal feature space. This modulation restores the model's LLM-like ability to detect unsafe content from visual inputs, while preserving the semantic integrity of original tokens for cross-modal reasoning. Extensive experiments across multiple jailbreak and utility benchmarks demonstrate that RAI substantially reduces attack success rate without compromising task performance.
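The abstract describes RAI's core mechanism: build an unsafe prototype subspace from language embeddings, score visual tokens by their projection onto it, and amplify the unsafe component of only the highest-risk tokens. The following is a minimal numpy sketch of that idea, not the authors' implementation; the function names, the SVD-based subspace construction, and the parameters `rank`, `top_k`, and `alpha` are illustrative assumptions.

```python
import numpy as np

def build_unsafe_subspace(unsafe_text_embeddings, rank=8):
    """Orthonormal basis spanning the dominant directions of unsafe
    language embeddings (rows of shape (n_prompts, d))."""
    _, _, vt = np.linalg.svd(unsafe_text_embeddings, full_matrices=False)
    return vt[:rank]  # (rank, d)

def risk_aware_modulation(visual_tokens, basis, top_k=16, alpha=0.5):
    """Amplify the unsafe-subspace component of the top_k highest-risk
    visual tokens; all other tokens are left untouched."""
    # Component of each token lying inside the unsafe subspace.
    unsafe_component = visual_tokens @ basis.T @ basis   # (n, d)
    # Per-token risk score: magnitude of the unsafe component.
    risk = np.linalg.norm(unsafe_component, axis=1)
    idx = np.argsort(risk)[-top_k:]                      # highest-risk tokens
    out = visual_tokens.copy()
    out[idx] += alpha * unsafe_component[idx]            # amplify unsafe signal
    return out, idx
```

Because only the selected tokens receive an additive nudge along the unsafe directions, the remaining tokens (and the orthogonal part of the selected ones) keep their original semantics, which is consistent with the paper's claim of preserving utility.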
Problem

Research questions and friction points this paper is trying to address.

vision-language models
multimodal jailbreak attacks
safety calibration
risk awareness
utility preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Risk Awareness Injection
Vision-Language Models
Multimodal Jailbreak Defense
Unsafe Prototype Subspace
Training-Free Calibration
👥 Authors
Mengxuan Wang
Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
Yuxin Chen
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Computer Science and Engineering, The Hong Kong University of Science and Technology
Gang Xu
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
Tao He
UESTC
Image Retrieval · Computer Vision
Hongjie Jiang
Shien-Ming Wu School of Intelligent Engineering, South China University of Technology
bioMEMS · Soft Materials · Flexible Electronics · Sensors
Ming Li
Senior Research Scientist, Guangming Lab
AIGC · MLLMs · Embodied AI