Robust Multimodal Safety via Conditional Decoding

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant degradation in safety alignment of multimodal large language models when confronted with harmful queries that exploit cross-modal interactions. To mitigate this vulnerability, the authors propose CASA, a conditional decoding strategy that predicts binary safety tokens based on internal model representations and incorporates a safety-aware attention module to enhance malicious query detection. Notably, CASA achieves robust cross-modal safety alignment without relying on external classifiers, auxiliary heads, or modality-specific fine-tuning. Experimental results demonstrate that CASA reduces attack success rates by over 97% on average across multiple benchmarks while preserving strong utility on benign inputs, as validated through both automated metrics and human evaluation.
📝 Abstract
Multimodal large language models (MLLMs) often experience degraded safety alignment when harmful queries exploit cross-modal interactions. Models aligned on text alone show a higher rate of successful attacks when extended to two or more modalities. In this work, we propose a simple conditional decoding strategy, CASA (Classification Augmented with Safety Attention), which utilizes internal representations of MLLMs to predict a binary safety token before response generation. We introduce a novel safety attention module designed to enhance the model's ability to detect malicious queries. Our design ensures robust safety alignment without relying on any external classifier or auxiliary head, and without the need for modality-specific safety fine-tuning. On diverse benchmarks such as MM-SafetyBench, JailbreakV-28k, and adversarial audio tests, CASA lowers the average attack success rate by more than 97% across modalities and attack types. Our empirical evaluations also show that CASA maintains strong utility on benign inputs, a result validated through both automated and human evaluations (via 13 trained annotators). Together, these results highlight CASA as a simple and generalizable framework for improving multimodal LLM safety.
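The abstract describes the mechanism at a high level: a learned safety attention pools the query's internal representations, a probe scores the pooled vector, and a binary safety token emitted before generation gates the response. A minimal sketch of that idea follows. All names, dimensions, and parameters here are hypothetical illustrations, not the authors' implementation; the real CASA operates on MLLM hidden states with learned weights.

```python
import math

# Hypothetical sketch of CASA-style conditional decoding (illustrative only,
# not the paper's implementation): pool per-token hidden states with a
# "safety attention" query vector, score the pooled vector with a linear
# probe, and emit a binary safety token that gates response generation.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def safety_token(hidden_states, attn_query, probe_w, probe_b, threshold=0.5):
    """Return "[UNSAFE]" or "[SAFE]" from per-token hidden states.

    hidden_states: list of d-dim vectors (one per input token; assumed given)
    attn_query:    d-dim safety-attention query (assumed learned)
    probe_w, probe_b: linear probe parameters (assumed learned)
    """
    # Safety attention: score each token state against the safety query.
    scores = [sum(q * h for q, h in zip(attn_query, hs)) for hs in hidden_states]
    weights = softmax(scores)
    # Attention-weighted pooled representation of the full query.
    d = len(attn_query)
    pooled = [sum(w * hs[i] for w, hs in zip(weights, hidden_states))
              for i in range(d)]
    # Linear probe -> probability the query is malicious.
    logit = sum(w * p for w, p in zip(probe_w, pooled)) + probe_b
    p_unsafe = 1.0 / (1.0 + math.exp(-logit))
    return "[UNSAFE]" if p_unsafe >= threshold else "[SAFE]"

def decode(hidden_states, attn_query, probe_w, probe_b, generate):
    """Conditional decoding: refuse when the predicted token is [UNSAFE]."""
    tok = safety_token(hidden_states, attn_query, probe_w, probe_b)
    if tok == "[UNSAFE]":
        return "I can't help with that."
    return generate()  # otherwise proceed with normal generation
```

With toy 2-d states where the first feature signals maliciousness (`attn_query=[1,0]`, `probe_w=[5,0]`, `probe_b=-1`), a query dominated by that feature is gated, while a benign one passes through to `generate()`. The key design point the abstract emphasizes is that the gate reads the model's own representations, so no external classifier or auxiliary head is required.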
Problem

Research questions and friction points this paper is trying to address.

multimodal safety · safety alignment · cross-modal interactions · harmful queries · adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

conditional decoding · safety attention · multimodal LLMs · safety alignment · adversarial robustness