Risk-adaptive Activation Steering for Safe Multimodal Large Language Models

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) are vulnerable to adversarial image inputs with latent malicious intent, while existing inference-time defenses suffer from high iterative overhead and excessive rejection. Method: We propose a risk-adaptive activation steering method that, during query processing, enhances cross-modal attention to localize safety-critical image regions, and jointly employs query reconstruction and dynamic risk assessment to modulate neuron activation strength in real time—achieving fine-grained safety alignment without model retraining. Contribution/Results: By embedding risk awareness directly into the forward inference path, our approach balances security and efficiency. Experiments across multiple benchmarks show a 42.3% average reduction in attack success rate, negligible degradation in general task performance (<0.5% drop), and an 18.7% speedup in inference latency—outperforming state-of-the-art inference-time defenses.

📝 Abstract
One of the key challenges for modern AI models is ensuring that they provide helpful responses to benign queries while refusing malicious ones. Yet models remain vulnerable to multimodal queries whose harmful intent is embedded in images. One approach to safety alignment is training on extensive safety datasets, at significant cost in both dataset curation and training. Inference-time alignment mitigates these costs but introduces two drawbacks: excessive refusals of misclassified benign queries and slower inference due to iterative output adjustments. To overcome these limitations, we propose to reformulate queries to strengthen cross-modal attention to safety-critical image regions, enabling accurate risk assessment at the query level. Using the assessed risk, our method adaptively steers activations to generate responses that are safe and helpful, without the overhead of iterative output adjustments. We call this Risk-adaptive Activation Steering (RAS). Extensive experiments across multiple multimodal safety and utility benchmarks demonstrate that RAS significantly reduces attack success rates, preserves general task performance, and improves inference speed over prior inference-time defenses.
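The steering step described in the abstract can be pictured as a single additive intervention on a layer's activations, scaled by the assessed risk. A minimal sketch, assuming a precomputed safety direction and a scalar risk score in [0, 1]; the function name, the `alpha` default, and the linear risk-to-strength mapping are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def risk_adaptive_steer(hidden, safety_direction, risk, alpha=8.0):
    """Shift hidden activations along a safety direction,
    scaled by the assessed risk of the query.

    hidden           : (seq_len, d_model) activations at one layer
    safety_direction : (d_model,) vector pointing toward safe/refusal behavior
    risk             : scalar in [0, 1] from the risk assessor
    alpha            : maximum steering strength (hypothetical default)
    """
    # Normalize so alpha alone controls the intervention magnitude.
    direction = safety_direction / np.linalg.norm(safety_direction)
    # risk == 0 leaves activations untouched (benign queries unaffected);
    # risk == 1 applies the full shift of magnitude alpha.
    return hidden + risk * alpha * direction
```

Because the shift is applied in a single forward pass, no iterative re-generation of the output is needed; benign queries (risk near 0) pass through essentially unchanged, which is how the method avoids excessive refusals.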
Problem

Research questions and friction points this paper is trying to address.

Preventing harmful responses to malicious multimodal queries with embedded images
Reducing excessive refusals of benign queries during inference-time safety alignment
Improving inference speed by eliminating iterative output adjustments in safety mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Strengthens cross-modal attention to safety-critical regions
Adaptively steers activations based on assessed risk
Generates safe responses without iterative output adjustments
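Taken together, the bullets above describe a single forward-pass pipeline: reconstruct the query to sharpen cross-modal attention, assess risk once, then steer activations accordingly. A hedged end-to-end sketch, where every helper (`reconstruct_query`, `assess_risk`, `respond`, and the keyword matching inside the risk scorer) is a hypothetical stand-in for the paper's components:

```python
def reconstruct_query(text, image):
    """Hypothetical: rewrite the query so cross-modal attention
    focuses on safety-critical image regions."""
    return f"{text} [attend: safety-critical regions of image]"

def assess_risk(query):
    """Hypothetical risk scorer in [0, 1]; a real system would read
    the model's own attention/activation signals, not keywords."""
    risky_terms = ("weapon", "exploit", "poison")
    return 1.0 if any(t in query.lower() for t in risky_terms) else 0.0

def respond(text, image, steer, max_strength=8.0):
    """Single forward pass: the assessed risk modulates steering
    strength, so no iterative output re-adjustment is needed."""
    query = reconstruct_query(text, image)
    strength = assess_risk(query) * max_strength
    return steer(query, strength)
```

The key design point this sketch illustrates is that risk assessment happens once, before generation, and only modulates a scalar strength; contrast this with iterative defenses that repeatedly re-score and regenerate the output.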