MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models

📅 2025-03-19
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This paper addresses trustworthiness gaps in multimodal foundation models (MMFMs) spanning safety, hallucination, fairness, privacy, adversarial robustness, and out-of-distribution (OOD) generalization. To this end, we propose MMDT, the first unified evaluation framework for MMFM trustworthiness, which systematically assesses all six of these dimensions. It introduces methodologies including task-adaptive red-teaming attacks, construction of challenging multi-scenario benchmarks, cross-modal bias quantification, and multimodal consistency analysis. Comprehensive evaluation using MMDT uncovers vulnerabilities across state-of-the-art MMFMs. We provide a reproducible evaluation protocol and an open-source platform (mmdecodingtrust.github.io), filling a gap in trustworthy MMFM assessment and moving multimodal AI toward verifiable, auditable, and reliable systems.

📝 Abstract
Multimodal foundation models (MMFMs) play a crucial role in various applications, including autonomous driving, healthcare, and virtual assistants. However, several studies have revealed vulnerabilities in these models, such as text-to-image models generating unsafe content. Existing benchmarks for multimodal models either predominantly assess helpfulness or focus only on limited perspectives such as fairness and privacy. In this paper, we present MMDT (Multimodal DecodingTrust), the first unified platform designed to provide a comprehensive safety and trustworthiness evaluation for MMFMs. Our platform assesses models from multiple perspectives, including safety, hallucination, fairness/bias, privacy, adversarial robustness, and out-of-distribution (OOD) generalization. For each perspective, we design various evaluation scenarios and red teaming algorithms under different tasks to generate challenging data, forming a high-quality benchmark. We evaluate a range of multimodal models using MMDT, and our findings reveal a series of vulnerabilities and areas for improvement across these perspectives. This work introduces the first comprehensive safety and trustworthiness evaluation platform for MMFMs, paving the way for developing safer and more reliable MMFMs and systems. Our platform and benchmark are available at https://mmdecodingtrust.github.io/.
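The page gives no implementation details, but the abstract's idea of a single platform scoring one model across six perspectives can be made concrete with a small sketch. Everything below (the `toy_model` wrapper, the `evaluate` stub, and the placeholder scoring scale) is a hypothetical illustration under assumed interfaces, not MMDT's actual API.

```python
from typing import Callable

Model = Callable[[str], str]

# The six perspectives named in the abstract.
PERSPECTIVES = [
    "safety",
    "hallucination",
    "fairness/bias",
    "privacy",
    "adversarial robustness",
    "OOD generalization",
]

def toy_model(prompt: str) -> str:
    """Placeholder MMFM; a real harness would wrap a text-to-image
    or image-to-text model behind this interface."""
    return f"response to {prompt!r}"

def evaluate(model: Model, perspective: str) -> float:
    """Stub evaluator: queries the model on one probe and returns a fixed
    placeholder score. MMDT's real evaluators run curated scenarios and
    red-teaming data per perspective and task."""
    _ = model(f"probe prompt for {perspective}")
    return 0.5  # placeholder trustworthiness score in [0, 1]

def trust_report(model: Model) -> dict[str, float]:
    """Score one model on every perspective, as a unified platform would."""
    return {p: evaluate(model, p) for p in PERSPECTIVES}

if __name__ == "__main__":
    for perspective, score in trust_report(toy_model).items():
        print(f"{perspective:>24}: {score:.2f}")
```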
Problem

Research questions and friction points this paper is trying to address.

How to comprehensively evaluate the safety and trustworthiness of multimodal foundation models
Existing benchmarks mostly measure helpfulness or cover only limited perspectives such as fairness and privacy
MMFMs exhibit vulnerabilities such as unsafe content generation and bias, with no unified platform for assessing them
Innovation

Methods, ideas, or system contributions that make the work stand out.

MMDT, a unified platform evaluating MMFMs across safety, hallucination, fairness/bias, privacy, adversarial robustness, and OOD generalization
Challenging evaluation scenarios per perspective and task, forming a high-quality benchmark
Red teaming algorithms for vulnerability detection (a minimal illustrative loop is sketched below)
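The page does not include the red teaming algorithms themselves, so the following is a minimal, hypothetical sketch of the general pattern such a platform might use: perturb seed prompts, query the model under test, and keep the prompts that elicit violations. `query_model`, `is_unsafe`, and `perturb` are placeholder stubs, not MMDT's API.

```python
from dataclasses import dataclass, field

def query_model(prompt: str) -> str:
    """Placeholder for a call to the MMFM under test (e.g. text-to-image)."""
    return f"<model output for: {prompt}>"

def is_unsafe(output: str) -> bool:
    """Placeholder safety judge (in practice a classifier or LLM-as-judge)."""
    return "weapon" in output.lower()

def perturb(prompt: str) -> list[str]:
    """Toy prompt transforms; real task-adaptive red teaming would use
    paraphrasing, adversarial suffixes, or optimization-based attacks."""
    return [prompt, prompt + ", photorealistic", prompt.upper()]

@dataclass
class RedTeamReport:
    failures: list[tuple[str, str]] = field(default_factory=list)

def red_team(seed_prompts: list[str]) -> RedTeamReport:
    """Sweep perturbed prompts and record those that elicit unsafe outputs."""
    report = RedTeamReport()
    for seed in seed_prompts:
        for candidate in perturb(seed):
            output = query_model(candidate)
            if is_unsafe(output):
                report.failures.append((candidate, output))
    return report

if __name__ == "__main__":
    report = red_team(["a person assembling a weapon"])
    print(f"{len(report.failures)} violating prompt(s) found")
```

In a real pipeline the perturbation step would be a search or optimization procedure rather than fixed transforms, and the surviving failure-inducing prompts would be curated into the challenging benchmark the abstract describes.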
👥 Authors
Chejian Xu
University of Illinois at Urbana-Champaign
Large Language Model, Trustworthy AI

Jiawei Zhang
University of Chicago

Zhaorun Chen
Ph.D. Student, UChicago CS
AI Safety, LLM Agent, Reinforcement Learning

Chulin Xie
Google DeepMind
Machine Learning, Optimization

Mintong Kang
UIUC
Machine Learning

Yujin Potter
UC Berkeley
AI Alignment, AI Safety

Zhun Wang
Graduate Student, UC Berkeley

Zhuowen Yuan
University of Illinois at Urbana-Champaign

Alexander Xiong
University of California, Berkeley

Zidi Xiong
Harvard University
Trustworthy machine learning

Chenhui Zhang
Massachusetts Institute of Technology

Lingzhi Yuan
PhD at University of Maryland, College Park & BEng at Zhejiang University
Trustworthy ML, AI Safety, Adversarial Robustness

Yi Zeng
Virginia Tech

Peiyang Xu
Princeton University
Trustworthy Machine Learning

Chengquan Guo
University of Chicago
Software Engineering, LLM

Andy Zhou
University of Illinois at Urbana-Champaign

Jeffrey Ziwei Tan
University of California, Berkeley

Xuandong Zhao
UC Berkeley
Machine Learning, Natural Language Processing, AI Safety

Francesco Pinto
Research Scientist, Google DeepMind
Agentic AI Safety and Security

Zhen Xiang
University of Georgia
machine learning

Yu Gai
University of California, Berkeley

Zinan Lin
Microsoft Research (Redmond), Carnegie Mellon University
machine learning, privacy

Dan Hendrycks
Director of the Center for AI Safety (advisor for xAI and Scale)
AI Safety, ML Reliability

Bo Li
University of Illinois at Urbana-Champaign, University of Chicago

Dawn Song
Professor of Computer Science, UC Berkeley
Computer Security and Privacy