🤖 AI Summary
Multimodal large language models (MLLMs) exhibit escalating safety risks in multi-turn dialogues, yet no systematic benchmark exists to evaluate such cumulative threats. Method: We introduce SafeMT, the first multi-turn multimodal safety benchmark, comprising 10,000 samples spanning 17 risk scenarios and four jailbreak methods. We propose a dialogue-level Safety Index (SI) and design a safety moderator capable of detecting latent malicious intent across turns and supplying MLLMs with relevant safety policies. Contribution/Results: Experiments on 17 models reveal that attack success rates against state-of-the-art MLLMs increase significantly with turn count, confirming the accumulation of safety vulnerabilities. Integrating our moderator substantially reduces multi-turn attack success across multiple open-source MLLMs, outperforming existing guard models. This work provides the first systematic characterization and mitigation of cumulative safety risks in multi-turn multimodal dialogue.
📝 Abstract
With the widespread use of Multimodal Large Language Models (MLLMs), safety issues have become a growing concern. Multi-turn dialogues, which are more common in everyday interactions, pose a greater risk than single prompts; however, existing benchmarks do not adequately address this situation. To encourage the community to focus on the safety of these models in multi-turn dialogues, we introduce SafeMT, a benchmark featuring dialogues of varying lengths generated from harmful queries accompanied by images. The benchmark consists of 10,000 samples in total, covering 17 different scenarios and four jailbreak methods. Additionally, we propose the Safety Index (SI) to evaluate the overall safety of MLLMs during conversations. We assess the safety of 17 models using this benchmark and find that the risk of successful attacks on these models increases as the number of turns in harmful dialogues grows. This observation indicates that the safety mechanisms of these models are inadequate for recognizing hazards in dialogue interactions. We therefore propose a dialogue safety moderator capable of detecting malicious intent concealed within conversations and providing MLLMs with relevant safety policies. Experimental results on several open-source models show that this moderator is more effective at reducing multi-turn attack success rates (ASR) than existing guard models.