PRISM: Robust VLM Alignment with Principled Reasoning for Integrated Safety in Multimodality

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing two key bottlenecks in vision-language model (VLM) safety alignment, namely *over-defensiveness that undermines utility* and *shallow alignment that fails to detect complex, reasoning-intensive threats*, this paper introduces PRISM, a framework that integrates System-2-style structured reasoning into VLM safety alignment. PRISM comprises (i) PRISM-CoT, a safety-aware chain-of-thought dataset, and (ii) PRISM-DPO, a training stage that uses Monte Carlo Tree Search to generate preference data for Direct Preference Optimization. Together these enable fine-grained safety-boundary modeling while preserving, and in some cases enhancing, model utility. Empirically, PRISM achieves a 0.15% attack success rate on JailbreakV-28K (Qwen2-VL), improves on the previous best method by 90% on VLBreak (LLaVA-1.5), substantially raises the computational cost of adaptive attacks, and generalizes out of distribution, reducing the attack success rate to just 8.70% on the multi-image MIS benchmark.

📝 Abstract
Safeguarding vision-language models (VLMs) is a critical challenge, as existing methods often suffer from over-defense, which harms utility, or rely on shallow alignment, failing to detect complex threats that require deep reasoning. To this end, we introduce PRISM (Principled Reasoning for Integrated Safety in Multimodality), a System-2-like framework that aligns VLMs by embedding a structured, safety-aware reasoning process. Our framework consists of two key components: PRISM-CoT, a dataset that teaches safety-aware chain-of-thought reasoning, and PRISM-DPO, a preference dataset generated via Monte Carlo Tree Search (MCTS) and used to further refine this reasoning through Direct Preference Optimization, helping the model learn a delicate safety boundary. Comprehensive evaluations demonstrate PRISM's effectiveness, achieving remarkably low attack success rates, including 0.15% on JailbreakV-28K for Qwen2-VL and a 90% improvement over the previous best method on VLBreak for LLaVA-1.5. PRISM also exhibits strong robustness against adaptive attacks, significantly increasing computational costs for adversaries, and generalizes effectively to out-of-distribution challenges, reducing attack success rates to just 8.70% on the challenging multi-image MIS benchmark. Remarkably, this robust defense is achieved while preserving, and in some cases enhancing, model utility. To promote reproducibility, we have made our code, data, and model weights available at https://github.com/SaFoLab-WISC/PRISM.
Problem

Research questions and friction points this paper is trying to address.

Mitigating over-defense and shallow alignment in vision-language models
Detecting complex threats that require deep reasoning to identify
Balancing robust safety with preserved model utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured, safety-aware reasoning process embedded into alignment
MCTS-generated preference data for Direct Preference Optimization (PRISM-DPO)
Safety-aware chain-of-thought reasoning dataset (PRISM-CoT)
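PRISM-DPO pairs MCTS-selected reasoning traces with rejected ones and optimizes the standard DPO objective. As a minimal sketch of that objective (not the paper's implementation; the function name, toy log-probabilities, and `beta` value are illustrative assumptions), the per-pair loss compares how much more the policy prefers the chosen trace than a frozen reference model does:

```python
import math

def dpo_loss(logp_chosen_policy, logp_rejected_policy,
             logp_chosen_ref, logp_rejected_ref, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are summed token log-probabilities of the chosen (safer
    reasoning trace) and rejected responses under the trained policy
    and a frozen reference model. The loss is lower when the policy
    favors the chosen trace more strongly than the reference does.
    """
    margin = beta * (
        (logp_chosen_policy - logp_chosen_ref)
        - (logp_rejected_policy - logp_rejected_ref)
    )
    # -log(sigmoid(margin)), computed stably as softplus(-margin)
    return math.log1p(math.exp(-margin)) if margin > -30 else -margin

# Toy pair: the policy already favors the chosen trace slightly more
# than the reference does, so the loss falls below log(2) ≈ 0.693.
loss = dpo_loss(-12.0, -15.0, -13.0, -14.5, beta=0.1)
```

At `margin = 0` (policy and reference agree) the loss equals `log(2)`; the `beta` scale controls how sharply the policy is pushed away from the reference preferences.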