🤖 AI Summary
Current VLM safety research suffers from two critical limitations: (1) incomplete safety benchmarking that neglects implicit harms arising from image-text interactions, and (2) the absence of endogenous safety mechanisms. To address these, we propose HoliSafe, the first comprehensive safety benchmark covering all five safe/unsafe image-text combinations, and SafeLLaVA, a novel VLM architecture featuring (i) learnable safety meta-tokens that implicitly encode harmful visual cues and steer the model toward safe responses, and (ii) an interpretable safety head that jointly predicts refusal decisions and fine-grained harm categories. Our methodology integrates multimodal safety data construction, dual-head co-training (generative + discriminative), and refusal-rationale-guided supervision. Experiments show that SafeLLaVA achieves state-of-the-art performance across multiple safety benchmarks. Moreover, HoliSafe systematically uncovers previously uncharacterized vulnerabilities of mainstream VLMs to context-sensitive jailbreak attacks.
📝 Abstract
Despite emerging efforts to enhance the safety of Vision-Language Models (VLMs), current approaches face two main shortcomings. 1) Existing safety-tuning datasets and benchmarks only partially consider how image-text interactions can yield harmful content, often overlooking contextually unsafe outcomes from seemingly benign pairs. This narrow coverage leaves VLMs vulnerable to jailbreak attacks in unseen configurations. 2) Prior methods rely primarily on data-centric tuning, offering few architectural innovations that intrinsically strengthen safety. We address these gaps by introducing HoliSafe, a holistic safety dataset and benchmark that spans all five safe/unsafe image-text combinations, providing a more robust basis for both training and evaluation. We further propose SafeLLaVA, a novel VLM augmented with a learnable safety meta-token and a dedicated safety head. The meta-token encodes harmful visual cues during training, intrinsically guiding the language model toward safer responses, while the safety head provides interpretable harmfulness classification aligned with refusal rationales. Experiments show that SafeLLaVA, trained on HoliSafe, achieves state-of-the-art safety performance across multiple VLM benchmarks. Additionally, the HoliSafe benchmark itself reveals critical vulnerabilities in existing models. We hope that HoliSafe and SafeLLaVA will spur further research into robust and interpretable VLM safety, opening new avenues for multimodal alignment.
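The safety meta-token and dual-head design described above can be sketched roughly as follows. This is a minimal NumPy illustration of the mechanism, not the released implementation: the hidden size, number of visual tokens, harm-category count, and the names `encode_with_meta_token` and `safety_head` are all hypothetical choices made for this sketch, and the real model would use trained transformer states rather than random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64       # hidden dimension (hypothetical)
N_VIS = 16   # number of visual tokens (hypothetical)
N_HARM = 7   # number of fine-grained harm categories (hypothetical)

# Learnable safety meta-token: a trainable vector appended to the visual
# token sequence so that attention can aggregate harmful visual cues into it.
safety_meta_token = rng.standard_normal(D)

def encode_with_meta_token(visual_tokens: np.ndarray) -> np.ndarray:
    """Append the safety meta-token to the visual token sequence."""
    return np.vstack([visual_tokens, safety_meta_token])

# Safety head: a single linear map over the meta-token's (final) state,
# jointly predicting a refusal decision and a harm-category distribution.
W_head = rng.standard_normal((D, 1 + N_HARM)) / np.sqrt(D)

def safety_head(meta_state: np.ndarray):
    logits = meta_state @ W_head
    refuse_prob = 1.0 / (1.0 + np.exp(-logits[0]))   # sigmoid: refuse or not
    harm_logits = logits[1:]
    harm_probs = np.exp(harm_logits - harm_logits.max())
    harm_probs /= harm_probs.sum()                   # softmax over categories
    return refuse_prob, harm_probs

# Usage: in the full model the meta-token state would come from the LLM's
# forward pass; here we stand in random "visual tokens" to show the shapes.
visual_tokens = rng.standard_normal((N_VIS, D))
tokens = encode_with_meta_token(visual_tokens)       # (N_VIS + 1, D)
refuse_prob, harm_probs = safety_head(tokens[-1])
```

In this reading, the generative loss (safe response text) and the discriminative loss (refusal and harm-category labels on the safety head) would be co-trained, which is what makes the refusal behavior interpretable: the same head that triggers a refusal also names the harm category it detected.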