🤖 AI Summary
Current multimodal large language models (MLLMs) lack context-aware safety decision-making in real-world scenarios, suffering a trade-off between oversensitivity (falsely rejecting benign queries) and undersensitivity (missing visually grounded risks). To address this, we propose a lightweight, model-agnostic decoding-regulation framework that combines contrastive decoding, which compares responses conditioned on the original image against a Gaussian-noise-perturbed copy, with global-aware token modulation, enabling token-level, multimodal-context-driven calibration of safety responses. The method couples visual sensitive-token identification with scene-level safety judgment, significantly improving context-dependent rejection accuracy, and it achieves substantial safety-alignment gains across diverse MLLM architectures and established safety benchmarks while preserving the base model's helpfulness.
📝 Abstract
Multimodal Large Language Models (MLLMs) are increasingly deployed in real-world applications, yet their ability to make context-aware safety decisions remains limited. Existing methods often fail to balance oversensitivity (unjustified refusals of benign queries) and undersensitivity (missed detection of visually grounded risks), leaving a persistent gap in safety alignment. To address this issue, we introduce Safety-aware Contrastive Decoding (SafeCoDe), a lightweight and model-agnostic decoding framework that dynamically adjusts token generation based on multimodal context. SafeCoDe operates in two stages: (1) a contrastive decoding mechanism that highlights tokens sensitive to visual context by contrasting real and Gaussian-noised images, and (2) a global-aware token modulation strategy that integrates scene-level reasoning with token-level adjustment to adapt refusals according to the predicted safety verdict. Extensive experiments across diverse MLLM architectures and safety benchmarks, covering undersensitivity, oversensitivity, and general safety evaluations, show that SafeCoDe consistently improves context-sensitive refusal behaviors while preserving model helpfulness.
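The two stages described above can be sketched at the logit level. This is a minimal illustration, not the paper's exact formulation: it assumes the common contrastive-decoding form `(1 + α)·logits_real − α·logits_noised` for stage (1), and a simple additive shift on refusal-token logits driven by a scene-level safety verdict for stage (2). The coefficients `alpha`, `beta`, the `verdict_unsafe` flag, and the toy vocabulary are all hypothetical.

```python
import numpy as np

def contrastive_logits(logits_real, logits_noised, alpha=1.0):
    """Stage (1), sketched: contrast logits from the real image against
    logits from a Gaussian-noised copy. Tokens whose probability drops
    once the visual evidence is destroyed are the visually grounded ones,
    and the contrast amplifies them."""
    return (1 + alpha) * logits_real - alpha * logits_noised

def modulate_refusal(logits, refusal_ids, verdict_unsafe, beta=2.0):
    """Stage (2), sketched: shift refusal-token logits up when the
    scene-level safety verdict flags the input as unsafe, and down
    otherwise, so refusals track the predicted verdict."""
    adjusted = logits.copy()
    shift = beta if verdict_unsafe else -beta
    adjusted[refusal_ids] += shift
    return adjusted

# Toy vocabulary of 5 tokens; pretend index 4 is a refusal token.
rng = np.random.default_rng(0)
logits_real = rng.normal(size=5)
logits_noised = rng.normal(size=5)

contrast = contrastive_logits(logits_real, logits_noised, alpha=1.0)
final = modulate_refusal(contrast, refusal_ids=[4], verdict_unsafe=True)
next_token = int(np.argmax(final))
```

In this toy setup the refusal token's logit rises by `beta` exactly when the verdict is unsafe; in the real system the modulation would apply per decoding step over the full vocabulary of refusal-indicative tokens.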