VLMGuard-R1: Proactive Safety Alignment for VLMs via Reasoning-Driven Prompt Optimization

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses novel security risks in vision-language models (VLMs) arising from multimodal interactions. To mitigate these threats without fine-tuning model parameters, we propose a proactive prompt optimization framework grounded in reasoning-driven cross-modal prompt rewriting. Our method introduces a three-stage inference pipeline—threat identification, intent reconstruction, and response generation—that enables fine-grained threat inference and actionable safety responses while avoiding generic refusals. Technically, the framework integrates multimodal reasoning modeling, a learnable prompt rewriter, high-quality synthetic safety data generation, and zero-shot transfer adaptation. Extensive experiments across three benchmarks and five state-of-the-art VLMs demonstrate that our approach consistently outperforms four baseline methods, achieving an average 43.59% improvement in safety on the SIUO benchmark.
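The paper does not include code in this listing; the following is a minimal Python sketch of how the three-stage inference pipeline (threat identification → intent reconstruction → response generation via a rewritten prompt) could be structured. All function names, the keyword heuristics, and the safety-note template are illustrative assumptions, not the paper's actual rewriter, which is a learned model.

```python
from dataclasses import dataclass

# Hypothetical sketch of a three-stage prompt-rewriting pipeline in the
# spirit of VLMGuard-R1. The learned rewriter is replaced by simple
# keyword heuristics purely for illustration.

@dataclass
class RewriteResult:
    threats: list        # stage 1: cross-modal threat cues found
    intent: str          # stage 2: reconstructed user intent
    safe_prompt: str     # stage 3: refined prompt sent to the target VLM

# Illustrative threat lexicon (a real system would use a reasoning model).
RISKY_TERMS = ("weapon", "explosive", "bypass")

def identify_threats(text: str, image_caption: str) -> list:
    """Stage 1: flag risky cues that emerge from combining modalities."""
    combined = f"{text} {image_caption}".lower()
    return [t for t in RISKY_TERMS if t in combined]

def reconstruct_intent(threats: list) -> str:
    """Stage 2: infer whether a benign core request remains."""
    return "potentially harmful request" if threats else "benign query"

def rewrite_prompt(text: str, threats: list) -> str:
    """Stage 3: refine the prompt so the VLM gives a safe, actionable
    answer instead of a generic refusal."""
    if not threats:
        return text
    return (f"{text}\n[Safety note: give safe, educational guidance; "
            f"do not provide operational detail about: {', '.join(threats)}]")

def guard_rewrite(text: str, image_caption: str) -> RewriteResult:
    """Run the full three-stage pipeline on a text-image input."""
    threats = identify_threats(text, image_caption)
    intent = reconstruct_intent(threats)
    return RewriteResult(threats, intent, rewrite_prompt(text, threats))
```

Because the rewriter only edits the input, the target VLM's parameters stay untouched, which is what lets the same rewriter transfer across architectures.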

📝 Abstract
Aligning Vision-Language Models (VLMs) with safety standards is essential to mitigate risks arising from their multimodal complexity, where integrating vision and language unveils subtle threats beyond the reach of conventional safeguards. Inspired by the insight that reasoning across modalities is key to preempting intricate vulnerabilities, we propose a novel direction for VLM safety: multimodal reasoning-driven prompt rewriting. To this end, we introduce VLMGuard-R1, a proactive framework that refines user inputs through a reasoning-guided rewriter, dynamically interpreting text-image interactions to deliver refined prompts that bolster safety across diverse VLM architectures without altering their core parameters. To achieve this, we devise a three-stage reasoning pipeline to synthesize a dataset that trains the rewriter to infer subtle threats, enabling tailored, actionable responses over generic refusals. Extensive experiments across three benchmarks with five VLMs reveal that VLMGuard-R1 outperforms four baselines. In particular, VLMGuard-R1 achieves a remarkable 43.59% increase in average safety across five models on the SIUO benchmark.
Problem

Research questions and friction points this paper is trying to address.

Aligning VLMs with safety standards to mitigate multimodal risks
Proactively refining user inputs via reasoning-driven prompt rewriting
Enhancing VLM safety without modifying core model parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal reasoning-driven prompt rewriting
Proactive framework refining user inputs
Three-stage reasoning pipeline for threat inference