SafeMT: Multi-turn Safety for Multimodal Language Models

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) exhibit escalating safety risks in multi-turn dialogues, yet no systematic benchmark exists to evaluate such cumulative threats. Method: We introduce SafeMT, the first multi-turn multimodal safety benchmark, comprising 10,000 samples spanning 17 risk categories and 4 jailbreaking strategies. We propose a novel dialogue-level Safety Index (SI) and design a strategy-guided safety moderator capable of detecting latent malicious intent across turns. Contribution/Results: Experiments reveal that attack success rates against state-of-the-art MLLMs increase significantly with turn count, confirming the accumulation of safety vulnerabilities. Integrating our moderator substantially reduces multi-turn attack success across multiple open-source MLLMs, demonstrating robust defense against progressive adversarial exploitation. This work provides the first systematic characterization and mitigation of cumulative safety risks in multi-turn multimodal dialogue.

📝 Abstract
With the widespread use of multimodal large language models (MLLMs), safety issues have become a growing concern. Multi-turn dialogues, which are more common in everyday interactions, pose a greater risk than single prompts; however, existing benchmarks do not adequately consider this setting. To encourage the community to focus on the safety of these models in multi-turn dialogues, we introduce SafeMT, a benchmark featuring dialogues of varying lengths generated from harmful queries accompanied by images. The benchmark comprises 10,000 samples in total, covering 17 scenarios and four jailbreak methods. We also propose the Safety Index (SI) to evaluate the overall safety of MLLMs during conversations. Evaluating 17 models on this benchmark, we find that the risk of a successful attack increases as the number of turns in a harmful dialogue grows, indicating that these models' safety mechanisms are inadequate for recognizing hazards that emerge across dialogue turns. We further propose a dialogue safety moderator that detects malicious intent concealed within conversations and supplies MLLMs with relevant safety policies. Experiments on several open-source models show that this moderator reduces multi-turn ASR more effectively than existing guard models.
Problem

Research questions and friction points this paper is trying to address.

Addressing multi-turn dialogue safety risks in multimodal large language models
Evaluating model vulnerability to escalating attacks in extended conversations
Developing detection mechanisms for malicious intent hidden across dialogue turns
Innovation

Methods, ideas, or system contributions that make the work stand out.

SafeMT benchmark for multi-turn dialogue safety
Safety Index for evaluating model conversation risks
Dialogue safety moderator detecting concealed malicious intent
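The summary and abstract report two metrics, turn-wise attack success rate (ASR) and a dialogue-level Safety Index (SI), but neither formula is given here. A minimal, hypothetical Python sketch of how such metrics might be aggregated; the weighting scheme and the field names `turns` / `attack_succeeded` are assumptions for illustration, not the paper's definitions:

```python
# Hypothetical sketch: the paper's SI formula is not stated in this
# summary, so this shows one plausible dialogue-level aggregation.
from collections import defaultdict

def turnwise_asr(dialogues):
    """ASR grouped by turn count. Each dialogue is a dict with
    'turns' (int) and 'attack_succeeded' (bool)."""
    totals, hits = defaultdict(int), defaultdict(int)
    for d in dialogues:
        totals[d["turns"]] += 1
        hits[d["turns"]] += int(d["attack_succeeded"])
    return {t: hits[t] / totals[t] for t in totals}

def safety_index(dialogues, max_turns=5):
    """Illustrative index in [0, 1]; 1.0 = fully safe. Attacks that
    succeed in later turns are penalized more (assumed scheme)."""
    if not dialogues:
        return 1.0
    penalty = sum(
        (d["turns"] / max_turns) * int(d["attack_succeeded"])
        for d in dialogues
    )
    return 1.0 - penalty / len(dialogues)
```

Under this assumed weighting, a benchmark where attacks only succeed at high turn counts would score a noticeably lower index than one with the same overall ASR concentrated in single-turn prompts, matching the paper's emphasis on cumulative multi-turn risk.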
Authors
Han Zhu, Hong Kong University of Science and Technology
Juntao Dai, Hong Kong University of Science and Technology
Jiaming Ji, Peking University
Haoran Li, Hong Kong University of Science and Technology
Chengkun Cai, University of Edinburgh
Pengcheng Wen, Hong Kong University of Science and Technology
Chi-Min Chan, HKUST
Boyuan Chen, Peking University
Yaodong Yang, Peking University
Sirui Han, The Hong Kong University of Science and Technology
Yike Guo, Hong Kong University of Science and Technology