🤖 AI Summary
This work addresses a critical gap: unified multimodal large models (UMLMs), which support both understanding and generation within a single architecture, currently lack systematic safety evaluation. To this end, we introduce Uni-SafeBench, the first holistic safety benchmark tailored to unified models, spanning six safety categories and seven task types, and we propose the Uni-Judger framework to disentangle contextual safety from intrinsic safety. Through multidimensional categorization, diverse tasks, and a hybrid evaluation methodology that combines automated and human assessment, our empirical study reveals that unified architectures substantially compromise intrinsic model safety, with open-source unified models significantly underperforming their specialized counterparts. All resources are publicly released to foster the development of safer artificial general intelligence.
📝 Abstract
Unified Multimodal Large Models (UMLMs) integrate understanding and generation capabilities within a single architecture. While this architectural unification, driven by the deep fusion of multimodal features, enhances model performance, it also introduces important yet underexplored safety challenges. Existing safety benchmarks predominantly target isolated understanding or generation tasks and thus fail to evaluate the holistic safety of UMLMs handling diverse tasks under a unified framework. To address this, we introduce Uni-SafeBench, a comprehensive benchmark built on a taxonomy of six major safety categories across seven task types. To ensure rigorous assessment, we develop Uni-Judger, an evaluation framework that decouples contextual safety from intrinsic safety. Comprehensive evaluations on Uni-SafeBench reveal that while unification enhances model capabilities, it significantly degrades the inherent safety of the underlying LLM. Furthermore, open-source UMLMs exhibit markedly lower safety than multimodal large models specialized for either generation or understanding. We open-source all resources to systematically expose these risks and foster safer AGI development.
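The abstract does not detail how Uni-Judger decouples the two safety axes, so the sketch below is a hypothetical illustration rather than the paper's implementation. It scores each (prompt, response) pair twice: once with the prompt as context (contextual safety) and once on the response in isolation (intrinsic safety), and lets an optional human rating override the automated score, mirroring the hybrid automated-plus-human protocol described above. The `auto_judge` callable, the [0, 1] score range, and the `min` aggregation are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical judge signature: maps text to a safety score in [0, 1],
# where higher means safer. Not the actual Uni-Judger API.
JudgeFn = Callable[[str], float]

@dataclass
class SafetyVerdict:
    contextual: float  # safety of the response given its (possibly adversarial) prompt
    intrinsic: float   # safety of the response judged in isolation
    final: float

def judge_sample(prompt: str, response: str,
                 auto_judge: JudgeFn,
                 human_score: Optional[float] = None) -> SafetyVerdict:
    """Score one (prompt, response) pair along two decoupled axes.

    - Contextual safety: judge the response together with its prompt, so
      context-dependent risks (e.g. a fluent answer to a harmful request)
      are penalized.
    - Intrinsic safety: judge the response alone, isolating harm that
      exists regardless of the prompt.
    A human score, when provided, overrides the automated judgment
    (the hybrid automated + human protocol mentioned above).
    """
    contextual = auto_judge(f"PROMPT: {prompt}\nRESPONSE: {response}")
    intrinsic = auto_judge(response)
    # Conservative aggregation (an assumption): a sample is only as safe
    # as its weaker axis.
    final = human_score if human_score is not None else min(contextual, intrinsic)
    return SafetyVerdict(contextual, intrinsic, final)
```

Taking the minimum of the two axes is one conservative design choice among several; averaging or reporting the axes separately (as a benchmark leaderboard might) would be equally plausible under the abstract's description.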