GuardAlign: Test-time Safety Alignment in Multimodal Large Language Models

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that multimodal large language models (MLLMs) often fail to reliably detect unsafe images in complex scenarios and suffer from unstable safety signals during generation, leading to risky outputs. To mitigate this, the authors propose GuardAlign, a training-free, test-time safety alignment framework that, for the first time, leverages optimal transport to measure the distributional distance between image patches and unsafe semantics. GuardAlign further introduces a cross-layer attention redistribution mechanism, combined with safety-aware prefix guidance, to inject safety signals stably throughout the entire generation process. Extensive experiments across six prominent MLLMs demonstrate that GuardAlign reduces unsafe response rates by up to 39% on the SPA-VL benchmark while simultaneously improving VQAv2 accuracy from 78.51% to 79.21%.
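The OT-based detector can be pictured as follows: image-patch embeddings and unsafe-concept embeddings define two discrete distributions, and the entropic optimal-transport (Sinkhorn) distance between them scores how close the image is to unsafe semantics. The sketch below is a minimal illustration of that idea only; the paper's actual cost function, marginals, and decision threshold are not specified here, so `sinkhorn_ot_distance`, `eps`, and the cutoff `tau` are all assumptions.

```python
import numpy as np

def sinkhorn_ot_distance(cost: np.ndarray, eps: float = 0.1,
                         n_iters: int = 200) -> float:
    """Entropic-regularized OT distance with uniform marginals,
    computed via plain Sinkhorn iterations (a sketch, not the
    paper's implementation)."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)              # uniform mass over image patches
    b = np.full(m, 1.0 / m)              # uniform mass over unsafe concepts
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):             # alternate marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]   # transport plan
    return float((plan * cost).sum())

# Toy usage: rows are L2-normalized patch embeddings (e.g. from CLIP),
# columns are unsafe-concept text embeddings; cosine cost = 1 - similarity.
rng = np.random.default_rng(0)
patches = rng.normal(size=(49, 512))
patches /= np.linalg.norm(patches, axis=1, keepdims=True)
concepts = rng.normal(size=(8, 512))
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)
cost = 1.0 - patches @ concepts.T
is_unsafe = sinkhorn_ot_distance(cost) < 0.9   # tau=0.9 is a placeholder
```

Because the marginals are uniform, this reduces to a soft set-to-set matching between patches and unsafe concepts, which is why entropic OT is a natural drop-in for region-level detection.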

📝 Abstract
Multimodal large language models (MLLMs) have achieved remarkable progress in vision-language reasoning tasks, yet ensuring their safety remains a critical challenge. Recent input-side defenses detect unsafe images with CLIP and prepend safety prefixes to prompts, but they still suffer from inaccurate detection in complex scenes and from unstable safety signals during decoding. To address these issues, we propose GuardAlign, a training-free defense framework that integrates two strategies. First, OT-enhanced safety detection leverages optimal transport to measure the distributional distance between image patches and unsafe semantics, enabling accurate identification of malicious regions without additional computational cost. Second, cross-modal attentive calibration strengthens the influence of safety prefixes by adaptively reallocating attention across layers, ensuring that safety signals remain consistently activated throughout generation. Extensive evaluations on six representative MLLMs demonstrate that GuardAlign reduces unsafe response rates by up to 39% on SPA-VL while preserving utility, even improving VQAv2 accuracy from 78.51% to 79.21%.
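To make the second strategy concrete, here is a minimal sketch of one plausible attention-reallocation rule: blend each query's post-softmax attention with a uniform distribution over the safety-prefix key positions, so the prefix keeps a guaranteed share of attention mass in every layer. The function name, the blending rule, and the strength `alpha` are assumptions; the paper's actual cross-layer calibration is not reproduced here.

```python
import torch

def calibrate_attention(attn: torch.Tensor, prefix_idx: torch.Tensor,
                        alpha: float = 0.2) -> torch.Tensor:
    """Blend post-softmax attention with a uniform distribution over
    the safety-prefix key positions, guaranteeing the prefix a fixed
    share `alpha` of every query's attention mass (a hypothetical rule).

    attn:       (n_heads, q_len, k_len), each row sums to 1
    prefix_idx: 1-D LongTensor of safety-prefix positions in [0, k_len)
    """
    out = (1.0 - alpha) * attn                     # shrink all columns
    out[..., prefix_idx] += alpha / prefix_idx.numel()  # give mass to prefix
    return out                                     # rows still sum to 1

# Toy usage: prefix occupies the first 4 key positions.
attn = torch.softmax(torch.randn(8, 16, 32), dim=-1)
calibrated = calibrate_attention(attn, torch.arange(4), alpha=0.2)
assert torch.allclose(calibrated.sum(-1), torch.ones(8, 16))
```

In practice such a hook would sit inside each attention layer, plausibly with `alpha` varying by depth since safety signals tend to decay in deeper layers; that schedule is likewise an assumption, not something the abstract specifies.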
Problem

Research questions and friction points this paper is trying to address.

safety alignment
multimodal large language models
unsafe image detection
safety signals
vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal Transport
Safety Alignment
Cross-modal Attention
Training-free Defense
Multimodal Large Language Models