SafeGRPO: Self-Rewarded Multimodal Safety Alignment via Rule-Governed Policy Optimization

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) exhibit compositional safety risks arising from cross-modal coupling: individually benign unimodal inputs may jointly yield harmful semantic interpretations, exposing insufficient safety awareness. Method: The paper proposes SafeGRPO, a self-rewarded multimodal safety alignment framework integrating structured safety reasoning, rule-constrained reward modeling, and step-guided safety thinking. It embeds rule-governed reward construction into Group Relative Policy Optimization (GRPO) and uses self-rewarded reinforcement learning to achieve interpretable, verifiable alignment. Contribution/Results: The paper introduces SafeTag-VL-3K, a multimodal safety dataset annotated with visual, textual, and combined safety labels. Experiments demonstrate significant improvements in compositional robustness and reasoning stability across multimodal safety benchmarks without compromising general capabilities.
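
The rule-constrained reward is the verifiable core of the method: instead of a learned reward model, sampled responses are scored against explicit rules and the dataset's safety tags. A minimal sketch of what such a scorer could look like, assuming a tag format and weights that are illustrative guesses rather than the paper's published specification (the function name `rule_governed_reward` is hypothetical):

```python
import re

# Illustrative tag format: three machine-checkable safety judgments,
# mirroring SafeTag-VL-3K's visual/textual/combined annotation axes.
TAG_PATTERN = re.compile(
    r"<visual>(safe|unsafe)</visual>\s*"
    r"<textual>(safe|unsafe)</textual>\s*"
    r"<combined>(safe|unsafe)</combined>"
)

def rule_governed_reward(response: str, gold: dict[str, str]) -> float:
    """Score one sampled response against ground-truth safety tags.

    Returns a format penalty if the response omits parseable tags,
    otherwise a weighted count of correctly judged modalities.
    """
    match = TAG_PATTERN.search(response)
    if match is None:
        return -1.0  # rule violated: no structured safety tags emitted
    pred = dict(zip(("visual", "textual", "combined"), match.groups()))
    # Assumed weights: the compositional judgment matters most, since
    # benign parts composing into harm is the paper's central risk case.
    weights = {"visual": 0.25, "textual": 0.25, "combined": 0.5}
    return sum(weights[k] for k in weights if pred[k] == gold[k])

# Example: benign image and benign text that compose into an unsafe request.
resp = ("<visual>safe</visual><textual>safe</textual>"
        "<combined>unsafe</combined>")
print(rule_governed_reward(resp, {"visual": "safe", "textual": "safe",
                                  "combined": "unsafe"}))  # 1.0
```

Because every component of this reward is checkable, each optimization step remains auditable, which is what the summary means by interpretable, verifiable alignment.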

📝 Abstract
Multimodal large language models (MLLMs) have demonstrated impressive reasoning and instruction-following capabilities, yet their expanded modality space introduces new compositional safety risks that emerge from complex text-image interactions. Such cross-modal couplings can produce unsafe semantics even when individual inputs are benign, exposing the fragile safety awareness of current MLLMs. While recent works enhance safety by guiding models to reason about potential risks, unregulated reasoning traces may compromise alignment; although Group Relative Policy Optimization (GRPO) offers self-rewarded refinement without human supervision, it lacks verifiable signals for reasoning safety. To address this, we propose SafeGRPO, a self-rewarded multimodal safety alignment framework that integrates rule-governed reward construction into GRPO, enabling interpretable and verifiable optimization of reasoning safety. Built upon the constructed SafeTag-VL-3K dataset with explicit visual, textual, and combined safety tags, SafeGRPO performs step-guided safety thinking to enforce structured reasoning and behavior alignment, substantially improving multimodal safety awareness, compositional robustness, and reasoning stability across diverse benchmarks without sacrificing general capabilities.
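
For context, GRPO dispenses with a learned value critic: for each prompt it samples a group of responses, scores them, and normalizes each reward by the group's mean and standard deviation to obtain advantages. A minimal sketch of this standard group-relative step, with a rule-governed scorer supplying the rewards (the `eps` value and example numbers are illustrative):

```python
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float],
                              eps: float = 1e-6) -> list[float]:
    """Standard GRPO advantage: each sampled response is judged relative
    to its own group, so no separate value network is needed."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four sampled responses scored by a rule-governed reward.
print(group_relative_advantages([1.0, 0.5, -1.0, 0.75]))
```

SafeGRPO's contribution sits on the reward side of this loop: because rewards come from explicit rules over safety tags rather than an opaque model, the resulting advantages carry a verifiable safety signal.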
Problem

Research questions and friction points this paper is trying to address.

Addresses multimodal safety risks from text-image interactions
Enhances verifiable safety alignment without human supervision
Improves compositional robustness and reasoning stability in MLLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates rule-governed reward construction into GRPO
Performs step-guided safety thinking for structured reasoning (see the prompt sketch after this list)
Uses multimodal dataset with explicit safety tags
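
A minimal sketch of what the step-guided safety thinking prompt could look like, assuming the three-step decomposition implied by the abstract and the tag format used in the reward sketch above; the wording is hypothetical, not the authors' released template:

```python
# Hypothetical step-guided safety prompt; the step order follows the
# paper's visual -> textual -> combined annotation axes, but the exact
# phrasing and tag syntax are illustrative assumptions.
STEP_GUIDED_SAFETY_PROMPT = """\
Before answering, assess the safety of the image-text pair.
Reason step by step inside <think>...</think>:
  Step 1: Judge the image on its own.
  Step 2: Judge the text on its own.
  Step 3: Judge their combination, since individually benign parts
          can compose into a harmful request.
Then emit machine-checkable tags:
<visual>safe|unsafe</visual>
<textual>safe|unsafe</textual>
<combined>safe|unsafe</combined>
Answer the request only if the combined judgment is safe; otherwise
refuse briefly and explain why.
"""
```

Structuring the trace this way is what makes it checkable: a rule-governed reward can parse the tags directly, so free-form reasoning never goes unscored.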