Automating Steering for Safe Multimodal Large Language Models

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) face significant security risks from adversarial multimodal inputs. This paper proposes AutoSteer, a fine-tuning-free, inference-time safety intervention framework that improves robustness across textual, visual, and cross-modal scenarios by dynamically detecting risk and regulating the generation process. Its core contributions are: (1) a Safety Awareness Score (SAS) that automatically identifies the model layer whose internal representations are most safety-relevant; and (2) an adaptive safety prober coupled with a lightweight Refusal Head, enabling modular, interpretable, real-time intervention. Extensive experiments on LLaVA-OV and Chameleon demonstrate that AutoSteer substantially reduces attack success rates across diverse adversarial threats—including prompt injection, image perturbation, and multimodal jailbreaking—while preserving the model's general capabilities and generation quality with negligible degradation. The framework requires no architectural modification or retraining, making it broadly applicable to existing MLLMs.

📝 Abstract
Recent progress in Multimodal Large Language Models (MLLMs) has unlocked powerful cross-modal reasoning abilities, but also raised new safety concerns, particularly when faced with adversarial multimodal inputs. To improve the safety of MLLMs during inference, we introduce a modular and adaptive inference-time intervention technology, AutoSteer, without requiring any fine-tuning of the underlying model. AutoSteer incorporates three core components: (1) a novel Safety Awareness Score (SAS) that automatically identifies the most safety-relevant distinctions among the model's internal layers; (2) an adaptive safety prober trained to estimate the likelihood of toxic outputs from intermediate representations; and (3) a lightweight Refusal Head that selectively intervenes to modulate generation when safety risks are detected. Experiments on LLaVA-OV and Chameleon across diverse safety-critical benchmarks demonstrate that AutoSteer significantly reduces the Attack Success Rate (ASR) for textual, visual, and cross-modal threats, while maintaining general abilities. These findings position AutoSteer as a practical, interpretable, and effective framework for safer deployment of multimodal AI systems.
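The abstract's pipeline—an adaptive safety prober reading intermediate representations, with a Refusal Head that intervenes when risk is detected—can be sketched as a linear probe plus a threshold. This is a minimal illustration, not the paper's implementation: the probe weights, dimensions, threshold, and function names below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: hidden size and a stand-in for a trained linear safety prober.
HIDDEN = 16
probe_w = rng.normal(size=HIDDEN)  # placeholder for learned prober weights
probe_b = 0.0
THRESHOLD = 0.5  # illustrative intervention threshold

def safety_probe(hidden_state: np.ndarray) -> float:
    """Estimate the probability of a toxic output from an intermediate representation."""
    logit = float(hidden_state @ probe_w + probe_b)
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid

def generate_with_steering(hidden_state, decode, refuse):
    """Continue normal decoding unless the prober flags the state, then refuse."""
    if safety_probe(hidden_state) >= THRESHOLD:
        return refuse()
    return decode()

# Toy inputs constructed so the probe logit is clearly negative vs clearly positive.
benign = -np.sign(probe_w)  # drives the logit below zero
risky = np.sign(probe_w)    # drives the logit above zero

out_benign = generate_with_steering(benign, lambda: "answer", lambda: "I can't help with that.")
out_risky = generate_with_steering(risky, lambda: "answer", lambda: "I can't help with that.")
# out_benign → "answer"; out_risky → the refusal string
```

Because the intervention is a read-only probe plus a substituted response, nothing in the underlying model's weights or architecture changes—consistent with the paper's fine-tuning-free claim.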
Problem

Research questions and friction points this paper is trying to address.

Enhancing safety of Multimodal Large Language Models against adversarial inputs
Introducing modular inference-time intervention without model fine-tuning
Reducing attack success rates while preserving model capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular adaptive inference-time intervention technology
Safety Awareness Score locates the most safety-relevant internal layer
Lightweight Refusal Head modulates risky generation
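The layer-selection idea behind the Safety Awareness Score can be illustrated with a simple separability measure: score each layer by how well its representations distinguish safe from unsafe inputs, then attach the prober to the highest-scoring layer. The Fisher-style score below is an assumption for illustration; the summary does not give SAS's exact form.

```python
import numpy as np

rng = np.random.default_rng(1)

def separability(safe: np.ndarray, unsafe: np.ndarray) -> float:
    """Distance between class means over pooled spread.

    A stand-in separability metric, NOT the paper's SAS formula.
    """
    gap = float(np.linalg.norm(safe.mean(axis=0) - unsafe.mean(axis=0)))
    spread = float(safe.std() + unsafe.std()) + 1e-8
    return gap / spread

# Toy per-layer hidden states for safe/unsafe probe inputs: 3 layers, 32 samples, dim 8.
layers_safe = [rng.normal(0.0, 1.0, size=(32, 8)) for _ in range(3)]
layers_unsafe = [rng.normal(0.0, 1.0, size=(32, 8)) for _ in range(3)]
layers_unsafe[1] += 3.0  # layer 1 carries a clear safety-relevant shift

scores = [separability(s, u) for s, u in zip(layers_safe, layers_unsafe)]
best_layer = int(np.argmax(scores))  # the layer the prober would be attached to
```

In this toy run the deliberately shifted layer wins, mirroring the intended behavior: the score automatically singles out the layer where safe and unsafe inputs are most distinguishable.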