🤖 AI Summary
Current vision-language models (VLMs) face novel safety risks in multimodal reasoning, as existing unimodal alignment methods and coarse-grained safety datasets fail to support fine-grained, policy-driven safety control. To address this, we propose MSR-Align, a policy-anchored multimodal safety alignment framework tailored to reasoning-oriented VLMs. Our method comprises three core components: (1) constructing a high-quality multimodal safety reasoning dataset grounded in standardized safety policies; (2) designing a policy-guided reasoning generation pipeline augmented with multimodal diversity synthesis; and (3) integrating a strong multimodal judge model for rigorous quality filtering. Experiments demonstrate that our approach significantly improves robustness against both textual and vision-language jailbreak attacks, while preserving, and in some cases enhancing, original multimodal reasoning capabilities. This work establishes a scalable, interpretable paradigm for the safety alignment of VLMs, advancing principled, policy-aware multimodal trustworthiness.
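As a concrete illustration of components (2) and (3), the snippet below sketches a policy-guided generate-then-filter loop. It is a minimal conceptual sketch only: every name in it (`generate_reasoning`, `judge_score`, the quality `threshold`) is a hypothetical placeholder for illustration, not the authors' actual implementation.

```python
# Conceptual sketch of the policy-guided generation and judge-based
# filtering loop described above. All names (generate_reasoning,
# judge_score, threshold) are hypothetical placeholders, not the
# authors' actual API.
from typing import Callable

def build_safety_reasoning_dataset(
    samples: list[dict],                             # (image, prompt) pairs
    policies: list[str],                             # standardized safety policies
    generate_reasoning: Callable[[dict, str], str],  # component (2): reasoning generator
    judge_score: Callable[[dict, str], float],       # component (3): multimodal judge
    threshold: float = 0.8,                          # assumed quality cutoff
) -> list[dict]:
    """Keep only policy-grounded reasoning traces the judge rates highly."""
    kept = []
    for sample in samples:
        for policy in policies:
            # Generate a reasoning trace anchored to one safety policy.
            trace = generate_reasoning(sample, policy)
            # Retain the trace only if the judge model scores it above the cutoff.
            if judge_score(sample, trace) >= threshold:
                kept.append({**sample, "policy": policy, "reasoning": trace})
    return kept
```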
📝 Abstract
Vision-Language Models (VLMs) have achieved remarkable progress in multimodal reasoning tasks through enhanced chain-of-thought capabilities. However, this advancement also introduces novel safety risks, as these models become increasingly vulnerable to harmful multimodal prompts that can trigger unethical or unsafe behaviors. Existing safety alignment approaches, primarily designed for unimodal language models, fall short in addressing the complex and nuanced threats posed by multimodal inputs. Moreover, current safety datasets lack the fine-grained, policy-grounded reasoning required to robustly align reasoning-capable VLMs. In this work, we introduce MSR-Align, a high-quality Multimodal Safety Reasoning dataset tailored to bridge this gap. MSR-Align supports fine-grained, deliberative reasoning over standardized safety policies across both vision and text modalities. Our data generation pipeline emphasizes multimodal diversity, policy-grounded reasoning, and rigorous quality filtering using strong multimodal judges. Extensive experiments demonstrate that fine-tuning VLMs on MSR-Align substantially improves robustness against both textual and vision-language jailbreak attacks, while preserving or enhancing general reasoning performance. MSR-Align provides a scalable and effective foundation for advancing the safety alignment of reasoning-capable VLMs. Our dataset is made publicly available at https://huggingface.co/datasets/Leigest/MSR-Align.
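For readers who want to inspect the data, a minimal loading sketch using the Hugging Face `datasets` library follows. The split name is an assumption; consult the dataset card for the actual splits and fields.

```python
# Minimal sketch: loading MSR-Align from the Hugging Face Hub.
# The split name "train" is an assumption; check the dataset card at
# https://huggingface.co/datasets/Leigest/MSR-Align for the actual
# configuration.
from datasets import load_dataset

ds = load_dataset("Leigest/MSR-Align", split="train")
print(ds[0])  # one policy-grounded multimodal safety reasoning example
```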