Does Unification Come at a Cost? Uni-SafeBench: A Safety Benchmark for Unified Multimodal Large Models

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical gap in systematically evaluating the safety of Unified Multimodal Large Models (UMLMs), which support both understanding and generation within a single architecture yet lack comprehensive risk assessment under that unified design. To this end, the authors introduce Uni-SafeBench, the first holistic safety evaluation benchmark tailored for UMLMs, encompassing six safety categories and seven task types. They further propose the Uni-Judger framework to disentangle contextual safety from intrinsic safety. Through multidimensional categorization, diverse tasks, and a hybrid evaluation methodology combining automated and human assessment, the empirical study reveals that unified architectures substantially compromise intrinsic model safety, with open-source unified models significantly underperforming their specialized counterparts. All resources are publicly released to foster the development of safer artificial general intelligence.
📝 Abstract
Unified Multimodal Large Models (UMLMs) integrate understanding and generation capabilities within a single architecture. While this architectural unification, driven by the deep fusion of multimodal features, enhances model performance, it also introduces important yet underexplored safety challenges. Existing safety benchmarks predominantly focus on isolated understanding or generation tasks, failing to evaluate the holistic safety of UMLMs when handling diverse tasks under a unified framework. To address this, we introduce Uni-SafeBench, a comprehensive benchmark featuring a taxonomy of six major safety categories across seven task types. To ensure rigorous assessment, we develop Uni-Judger, a framework that effectively decouples contextual safety from intrinsic safety. Based on comprehensive evaluations on Uni-SafeBench, we find that while the unification process enhances model capabilities, it significantly degrades the inherent safety of the underlying LLM. Furthermore, open-source UMLMs exhibit much lower safety performance than multimodal large models specialized for either generation or understanding tasks. We open-source all resources to systematically expose these risks and foster safer AGI development.
Problem

Research questions and friction points this paper is trying to address.

Unified Multimodal Large Models
Safety Benchmark
Model Unification
Multimodal Safety
Holistic Safety Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Multimodal Large Models
Safety Benchmark
Uni-SafeBench
Uni-Judger
Intrinsic Safety
Zixiang Peng
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Yongxiu Xu
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Qinyi Zhang
Institute of Automation, Chinese Academy of Sciences (CASIA); School of Artificial Intelligence, University of Chinese Academy of Sciences
Jiexun Shen
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Yifan Zhang
Hongbo Xu
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Yubin Wang
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Gaopeng Gou
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences