🤖 AI Summary
Existing safety evaluation benchmarks for multimodal large language models (MLLMs) suffer from insufficient modality coverage and inadequate risk dimensionality. Method: We introduce OutSafe-Bench, the first comprehensive multimodal content safety benchmark tailored for the MLLM era. It covers text, image, audio, and video modalities with 18,000 bilingual prompts, 4,500 images, 450 audio clips, and 450 videos, all systematically annotated across nine safety risk categories. We propose the Multidimensional Cross-Risk Score (MCRS) to model inter-category risk correlations, and we design FairScore, an interpretable, automated multi-reviewer weighted aggregation framework that recruits top-performing models as an adaptive reviewer ensemble. Contribution/Results: In an evaluation of nine mainstream MLLMs, OutSafe-Bench proves highly sensitive and effective, exposing critical safety vulnerabilities in current models. It establishes foundational infrastructure for rigorous content safety assessment and robustness research in multimodal AI.
📝 Abstract
As Multimodal Large Language Models (MLLMs) are increasingly integrated into everyday tools and intelligent agents, concerns are growing about their potential to produce unsafe content, ranging from toxic language and biased imagery to privacy violations and harmful misinformation. Current safety benchmarks remain limited in both modality coverage and evaluation methodology, often neglecting large parts of the content safety landscape. In this work, we introduce OutSafe-Bench, the first comprehensive content safety evaluation suite designed for the multimodal era. OutSafe-Bench includes a large-scale dataset that spans four modalities, featuring over 18,000 bilingual (Chinese and English) text prompts, 4,500 images, 450 audio clips, and 450 videos, all systematically annotated across nine critical content risk categories. Beyond the dataset, we introduce the Multidimensional Cross-Risk Score (MCRS), a novel metric designed to model and assess overlapping and correlated content risks across categories. To ensure fair and robust evaluation, we propose FairScore, an explainable, automated multi-reviewer weighted aggregation framework. FairScore selects top-performing models as adaptive juries, thereby mitigating the biases of single-model judgments and enhancing overall evaluation reliability. Our evaluation of nine state-of-the-art MLLMs reveals persistent and substantial safety vulnerabilities, underscoring the pressing need for robust safeguards in MLLMs.
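The abstract does not spell out the MCRS formula, but "modeling overlapping and correlated content risks" suggests something like a correlation-weighted combination of per-category risk scores. Below is a minimal sketch under that assumption; the category names, the co-occurrence-derived correlation matrix, and the quadratic-form aggregation are all illustrative choices on our part, not the paper's published definition.

```python
import numpy as np

# Nine risk categories; these names are placeholders, since the
# abstract does not list the paper's exact taxonomy.
CATEGORIES = [
    "toxicity", "bias", "privacy", "misinformation", "violence",
    "self_harm", "illegal_activity", "sexual_content", "extremism",
]

def mcrs(per_category_scores: np.ndarray, corr: np.ndarray) -> float:
    """Sketch of a Multidimensional Cross-Risk Score.

    per_category_scores: shape (9,), each in [0, 1], e.g. a model's
        judged risk rate for one category.
    corr: shape (9, 9), nonnegative, symmetric, diagonal 1 --
        inter-category risk correlations, e.g. estimated from label
        co-occurrence in the annotated dataset. This weighting scheme
        is our assumption, not the paper's formula.
    """
    r = np.clip(per_category_scores, 0.0, 1.0)
    # Quadratic form r^T C r: correlated category pairs that are both
    # risky reinforce each other instead of being counted independently.
    raw = r @ corr @ r
    # Normalize by the worst case (all categories maximally unsafe)
    # so the score stays in [0, 1].
    return float(raw / corr.sum())

# Toy example: correlations from co-occurrence of category labels
# across annotated samples (rows = samples, columns = categories).
labels = np.random.default_rng(0).integers(0, 2, size=(1000, 9))
corr = np.corrcoef(labels, rowvar=False)
corr = np.clip(corr, 0.0, None)  # keep only positive co-risk for this sketch
np.fill_diagonal(corr, 1.0)
scores = np.array([0.1, 0.4, 0.2, 0.6, 0.1, 0.0, 0.3, 0.2, 0.1])
print(f"MCRS ≈ {mcrs(scores, corr):.3f}")
```

The quadratic form makes a pair of correlated categories that are both risky count for more than two independent risks, which is one plausible reading of "cross-risk"; a simple per-category average would miss exactly that interaction.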
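FairScore is described only as an explainable multi-reviewer weighted aggregation that selects top-performing models as adaptive juries. Here is a minimal sketch of that pattern, assuming (hypothetically) that reviewer weights come from accuracy on a human-labeled calibration set and that a fixed `min_accuracy` threshold performs the jury selection; neither detail is given in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    # Accuracy on a human-labeled calibration set; how weights are
    # actually derived in the paper is not specified in the abstract.
    calibration_accuracy: float

def fairscore(judgments: dict[str, float], jury: list[Reviewer],
              min_accuracy: float = 0.7) -> float:
    """Sketch of FairScore-style weighted aggregation.

    judgments: reviewer name -> risk score in [0, 1] for one response.
    jury: candidate reviewer models; only those above `min_accuracy`
        are kept (our stand-in for "selecting top-performing models
        as adaptive juries").
    Returns the calibration-weighted mean of the surviving reviewers'
    scores, which dilutes any single model's bias.
    """
    active = [r for r in jury if r.calibration_accuracy >= min_accuracy]
    if not active:
        raise ValueError("no reviewer passed the calibration threshold")
    total_weight = sum(r.calibration_accuracy for r in active)
    return sum(r.calibration_accuracy * judgments[r.name]
               for r in active) / total_weight

jury = [Reviewer("model_a", 0.91), Reviewer("model_b", 0.84),
        Reviewer("model_c", 0.62)]           # model_c is filtered out
judgments = {"model_a": 0.20, "model_b": 0.35, "model_c": 0.90}
print(f"FairScore ≈ {fairscore(judgments, jury):.3f}")
```

Weighting by a published calibration score keeps the aggregate inspectable (each reviewer's contribution to the verdict is explicit), which matches the "explainable" framing, while the selection threshold keeps weak judges from diluting the result.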