AM³Safety: Towards Data Efficient Alignment of Multi-modal Multi-turn Safety for MLLMs

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of multimodal large language models to progressive harmful intent attacks in multi-turn dialogues, a challenge inadequately mitigated by existing single-turn alignment methods. To tackle this, the authors introduce InterSafe-V, the first open-source dataset dedicated to multi-turn multimodal safety, comprising 11,270 dialogues. They further propose the AM³Safety framework, which employs a cold-start refusal phase and a turn-aware dual-objective reward mechanism to guide GRPO fine-tuning for efficient and robust safety alignment. Evaluated on Qwen2.5-VL-7B-Instruct and LLaVA-NeXT-7B, the approach reduces attack success rates by over 10%, improves harmlessness by at least 8%, and enhances helpfulness by more than 13%, all while preserving general capabilities.
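The turn-aware dual-objective reward is the core of the GRPO phase, but its exact form is not spelled out on this page. Below is a minimal sketch under plausible assumptions: each turn receives a harmlessness and a helpfulness score, and later turns are weighted more heavily since safety tends to erode as harmful intent is reconstructed across the dialogue. The function name, the linear turn weighting, and the `alpha` trade-off are illustrative choices, not the authors' implementation.

```python
from typing import List

def turn_aware_reward(
    harmless: List[float],   # per-turn harmlessness scores in [0, 1]
    helpful: List[float],    # per-turn helpfulness scores in [0, 1]
    alpha: float = 0.5,      # trade-off between the two objectives
) -> float:
    """Hypothetical turn-aware dual-objective reward over one whole dialogue.

    Later turns get larger weights, reflecting the observation that safety
    degrades as the conversation progresses.
    """
    assert len(harmless) == len(helpful)
    n = len(harmless)
    # Linearly increasing turn weights, normalized to sum to 1.
    weights = [(t + 1) / (n * (n + 1) / 2) for t in range(n)]
    return sum(
        w * (alpha * hl + (1 - alpha) * hp)
        for w, hl, hp in zip(weights, harmless, helpful)
    )
```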

📝 Abstract
Multi-modal Large Language Models (MLLMs) are increasingly deployed in interactive applications. However, their safety vulnerabilities become pronounced in multi-turn multi-modal scenarios, where harmful intent can be gradually reconstructed across turns and safety constraints are progressively forgotten as the conversation proceeds. Existing Reinforcement Learning from Human Feedback (RLHF) alignment methods are largely developed for the single-turn visual question-answering (VQA) task and often require costly manual preference annotations, limiting their effectiveness and scalability in dialogues. To address this challenge, we present InterSafe-V, an open-source multi-modal dialogue dataset containing 11,270 dialogues and 500 specially designed refusal VQA samples. The dataset, constructed through interactions among several models, is designed to more accurately reflect real-world scenarios and includes specialized VQA pairs tailored to specific domains. Building on this dataset, we propose AM³Safety, a framework that combines a cold-start refusal phase with Group Relative Policy Optimization (GRPO) fine-tuning using turn-aware dual-objective rewards computed over entire dialogues. Experiments on Qwen2.5-VL-7B-Instruct and LLaVA-NeXT-7B show a decrease of more than 10% in Attack Success Rate (ASR), together with gains of at least 8% on the harmlessness dimension and over 13% on the helpfulness dimension of MLLMs on multi-modal multi-turn safety benchmarks, while preserving their general abilities.
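GRPO itself is a published algorithm; its defining step, which the framework applies over whole dialogues after the cold-start refusal phase, is to sample a group of candidate completions per prompt, score them with the reward, and normalize each reward against the group statistics instead of training a separate critic. The sketch below shows only that normalization step; how the authors batch dialogues and schedule the two training phases is not specified on this page.

```python
import numpy as np

def grpo_advantages(group_rewards: np.ndarray) -> np.ndarray:
    """Group-relative advantages as used in GRPO.

    Each candidate's reward is normalized against the mean and standard
    deviation of its sampling group, removing the need for a value model.
    """
    mean = group_rewards.mean()
    std = group_rewards.std() + 1e-8  # avoid division by zero
    return (group_rewards - mean) / std
```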
Problem

Research questions and friction points this paper is trying to address.

Multi-modal Large Language Models
Multi-turn Safety
Alignment
Reinforcement Learning from Human Feedback
Safety Vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-modal multi-turn safety
AM³Safety
Group Relative Policy Optimization
cold-start refusal
InterSafe-V dataset
Han Zhu
Hong Kong University of Science and Technology
Jiale Chen
Zhongshan School of Medicine, Sun Yat-sen University
Chengkun Cai
University of Edinburgh
Shengjie Sun
AISpeech
Haoran Li
University of Science and Technology of China
Yujin Zhou
Hong Kong University of Science and Technology
Chi-Min Chan
Hong Kong University of Science and Technology
Pengcheng Wen
Hong Kong University of Science and Technology
Lei Li
University of Washington, University of Copenhagen
Sirui Han
Hong Kong University of Science and Technology
Yike Guo
Hong Kong University of Science and Technology