🤖 AI Summary
This study addresses the limited multimodal reasoning, complex inference, and cross-modal knowledge integration capabilities of multimodal large language models (MLLMs) in authentic Chinese standardized testing scenarios. To this end, we introduce MULTI, the first large-scale, exam-based multimodal evaluation benchmark, comprising over 18,000 image-text questions, together with two supplementary resources: MULTI-Elite, a high-difficulty question subset, and MULTI-Extend, a collection of knowledge-augmented context pieces. We propose an education-oriented, high-fidelity evaluation framework featuring a rigorously calibrated expert human baseline (86.1%) and a scalable in-context learning assessment protocol. Experimental results reveal that the state-of-the-art model Qwen2-VL-72B achieves only 76.9%, substantially underperforming humans and exposing critical bottlenecks in deep reasoning and cross-modal knowledge fusion. MULTI thus establishes a reliable, expert-level benchmark for advancing MLLMs in educational AI.
📝 Abstract
The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. While existing datasets often feature synthetic or overly simplistic tasks, some models have already surpassed human expert baselines on them. In this paper, we present MULTI, a Chinese multimodal dataset derived from authentic examination questions. Comprising over 18,000 carefully selected and refined questions, MULTI evaluates models against real-world examination standards, encompassing image-text comprehension, complex reasoning, and knowledge recall. We also introduce MULTI-Elite, a curated hard subset of 500 questions, and MULTI-Extend, a collection of more than 4,500 external knowledge context pieces for testing in-context learning capabilities. Our evaluation highlights substantial room for MLLM advancement: the best of the 25 evaluated models, Qwen2-VL-72B, achieves 76.9% accuracy on MULTI and 53.1% on MULTI-Elite, compared to human expert baselines of 86.1% and 73.1%, respectively. MULTI serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.