MTMCS-Bench: Evaluating Contextual Safety of Multimodal Large Language Models in Multi-Turn Dialogues

📅 2026-01-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing benchmarks, which predominantly focus on single-turn interactions and fail to evaluate the progressive or context-switching safety risks arising from the coupling of visual scenes and conversational history in multimodal large language models (MLLMs). To this end, we propose the first context-aware safety evaluation framework tailored for multi-turn multimodal dialogue, introducing a benchmark dataset comprising over 30,000 image-text samples. Our framework defines three core metrics: contextual intent recognition, safety awareness on unsafe instances, and helpfulness on benign queries, and provides paired safe/unsafe samples along with a structured evaluation protocol. Experiments across 15 mainstream models reveal a pervasive trade-off between safety and utility—models either overlook incremental risks or excessively reject benign requests. Current safety mechanisms only partially mitigate these issues and remain inadequate for handling complex, multi-turn contextual safety challenges.

Technology Category

Application Category

📝 Abstract
Multimodal large language models (MLLMs) are increasingly deployed as assistants that interact through text and images, making it crucial to evaluate contextual safety when risk depends on both the visual scene and the evolving dialogue. Existing contextual safety benchmarks are mostly single-turn and often miss how malicious intent can emerge gradually or how the same scene can support both benign and exploitative goals. We introduce the Multi-Turn Multimodal Contextual Safety Benchmark (MTMCS-Bench), a benchmark of realistic images and multi-turn conversations that evaluates contextual safety in MLLMs under two complementary settings, escalation-based risk and context-switch risk. MTMCS-Bench offers paired safe and unsafe dialogues with structured evaluation. It contains over 30 thousand multimodal (image+text) and unimodal (text-only) samples, with metrics that separately measure contextual intent recognition, safety-awareness on unsafe cases, and helpfulness on benign ones. Across eight open-source and seven proprietary MLLMs, we observe persistent trade-offs between contextual safety and utility, with models tending to either miss gradual risks or over-refuse benign dialogues. Finally, we evaluate five current guardrails and find that they mitigate some failures but do not fully resolve multi-turn contextual risks.
Problem

Research questions and friction points this paper is trying to address.

contextual safety
multimodal large language models
multi-turn dialogues
risk escalation
context-switch risk
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-turn dialogue
multimodal large language models
contextual safety
escalation-based risk
context-switch risk
🔎 Similar Papers
No similar papers found.