🤖 AI Summary
Existing MLLM safety evaluation benchmarks suffer from low data quality, narrow risk coverage, and limited modality combinations; critically, they evaluate harmful-query vulnerability and benign-input oversensitivity in isolation—leading to inflated and contradictory results. This paper introduces USB (Unified Safety Benchmark), the first framework to jointly assess both vulnerability and oversensitivity within a single, coherent evaluation paradigm. USB features a high-quality, synthetically generated dataset spanning 61 fine-grained risk categories, four modality combinations (text-only, image-only, text–image, and interleaved text–image), and bilingual (Chinese–English) support. We further propose a quantitative completeness metric grounded in risk dimensions and modality diversity. As the most comprehensive MLLM safety benchmark to date (61 × 4 × 2 × 2), USB significantly improves safety issue detection rates and evaluation consistency.
📝 Abstract
Despite their remarkable achievements and widespread adoption, Multimodal Large Language Models (MLLMs) have revealed significant security vulnerabilities, highlighting the urgent need for robust safety evaluation benchmarks. Existing MLLM safety benchmarks, however, fall short in data quality, risk coverage, and modality combinations, producing inflated and contradictory evaluation results that hinder the discovery and governance of security concerns. Moreover, we argue that vulnerability to harmful queries and oversensitivity to harmless ones should be evaluated jointly in MLLM safety assessment, whereas prior work has considered them separately. To address these shortcomings, we introduce the Unified Safety Benchmark (USB), one of the most comprehensive evaluation benchmarks for MLLM safety. Our benchmark features high-quality queries, extensive risk categories, and comprehensive modality combinations, and encompasses both vulnerability and oversensitivity evaluations. Along two key dimensions, risk categories and modality combinations, we demonstrate that existing benchmarks -- even the union of the vast majority of them -- are far from truly comprehensive. To bridge this gap, we design a sophisticated data synthesis pipeline that generates extensive, high-quality complementary data covering previously unexplored aspects. By combining open-source datasets with our synthetic data, our benchmark provides 4 distinct modality combinations for each of the 61 risk sub-categories, covering both English and Chinese across both vulnerability and oversensitivity dimensions.