MULTI: Multimodal Understanding Leaderboard with Text and Images

📅 2024-02-05
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the limited multimodal reasoning, complex inference, and cross-modal knowledge integration capabilities of multimodal large language models (MLLMs) in authentic Chinese standardized testing scenarios. To this end, we introduce MULTI—the first large-scale, exam-based multimodal evaluation benchmark comprising over 18,000 image-text questions—along with two challenging subsets: MULTI-Elite (high-difficulty items) and MULTI-Extend (knowledge-augmented context). We propose an education-oriented, high-fidelity evaluation framework featuring a rigorously calibrated expert human baseline (86.1%) and a scalable in-context learning assessment protocol. Experimental results reveal that the state-of-the-art model Qwen2-VL-72B achieves only 76.9%, substantially underperforming humans and exposing critical bottlenecks in deep reasoning and cross-modal knowledge fusion. MULTI thus establishes a reliable, expert-level benchmark for advancing MLLMs in educational AI.

📝 Abstract
The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. While existing datasets often feature synthetic or overly simplistic tasks, some models have already surpassed human expert baselines. In this paper, we present MULTI, a Chinese multimodal dataset derived from authentic examination questions. Comprising over 18,000 carefully selected and refined questions, MULTI evaluates models using real-world examination standards, encompassing image-text comprehension, complex reasoning, and knowledge recall. We also introduce MULTI-Elite, a hard subset of 500 selected questions, and MULTI-Extend, with more than 4,500 external knowledge context pieces for testing in-context learning capabilities. Our evaluation highlights substantial room for MLLM advancement: Qwen2-VL-72B leads the 25 evaluated models with 76.9% accuracy on MULTI and 53.1% on MULTI-Elite, compared to human expert baselines of 86.1% and 73.1%. MULTI serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
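The headline numbers above are plain exact-match accuracy on exam-style questions. A minimal sketch of that metric follows; the prediction/answer format (single option letters) is an assumption for illustration, not MULTI's actual data schema.

```python
def accuracy(predictions, answers):
    """Fraction of questions answered exactly correctly (exact match)."""
    assert len(predictions) == len(answers)
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Toy example with hypothetical option-letter answers:
preds = ["A", "C", "B", "D"]
gold = ["A", "C", "B", "A"]
print(accuracy(preds, gold))  # → 0.75
```

Under this metric, a model at 76.9% on MULTI trails the reported 86.1% human expert baseline by roughly nine points.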
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Complex Chinese Exam Questions
Human Expert Performance Gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

MULTI Dataset
Contextual Learning
Expert-Level Capability Benchmark
Zichen Zhu
Shanghai Jiao Tong University
GUI agents, multimodal large language models, human-computer interaction
Yang Xu
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Lu Chen
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China
Jingkai Yang
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Yichuan Ma
Fudan University
LLM, Synthetic Data
Yiming Sun
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Hailin Wen
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Jiaqi Liu
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Jinyu Cai
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Yingzi Ma
PhD student, University of Wisconsin Madison
NLP, VLM
Situo Zhang
Shanghai Jiao Tong University
Large Language Models, Reinforcement Learning
Zihan Zhao
Shanghai Jiao Tong University
NLP
Liangtai Sun
Master, Shanghai Jiao Tong University
NLP, GUI understanding, Multi-modal
Kai Yu
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China