M4U: Evaluating Multilingual Understanding and Reasoning for Large Multimodal Models

📅 2024-05-24
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
Existing multilingual multimodal benchmarks fail to discriminate model capabilities effectively: even purely text-based language models achieve high scores, undermining evaluation of cross-lingual vision–language joint reasoning. To address this, we introduce M4U, a rigorous benchmark for multilingual multimodal understanding covering 64 disciplines across 16 subfields in six languages, with 10,000 high-quality samples. Methodologically, M4U combines multilingual multimodal prompt design, cross-lingual consistency evaluation, discipline-balanced sampling, and vision–text semantic alignment verification. Key findings: the state-of-the-art GPT-4o achieves only 47.6% average accuracy, and mainstream large multimodal models exhibit significant language preferences, with cross-lingual joint reasoning performance dropping by up to 23.5%. M4U systematically exposes failures in multi-discipline, multilingual, and multimodal reasoning while providing a fine-grained framework for analyzing language preferences, thereby overcoming critical limitations of prior benchmarks.

📝 Abstract
Multilingual capability is an essential aspect of large multimodal models, since they are usually deployed across various countries and languages. However, most existing benchmarks for multilingual multimodal reasoning struggle to differentiate between models of varying performance; even language models without visual capabilities can easily achieve high scores. This leaves a comprehensive evaluation of leading multilingual multimodal models largely unexplored. In this work, we introduce M4U, a novel and challenging benchmark for assessing the capability of multi-discipline multilingual multimodal understanding and reasoning. M4U contains 10k samples covering 64 disciplines across 16 subfields in Science, Engineering, and Healthcare in six languages. Using M4U, we conduct extensive evaluations of leading Large Multimodal Models (LMMs) and Large Language Models (LLMs) with external tools. The evaluation results demonstrate that the state-of-the-art model, GPT-4o, achieves only 47.6% average accuracy on M4U. Additionally, we observe that the leading LMMs exhibit significant language preferences. Our in-depth analysis indicates that leading LMMs, including GPT-4o, struggle to perform reasoning using multilingual information present in both visual and textual context. Specifically, they suffer performance degradation when prompted with cross-lingual multimodal questions. Our code and dataset are publicly available.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multilingual multimodal understanding and reasoning in models
Addressing lack of challenging benchmarks for multilingual multimodal performance
Assessing cross-lingual reasoning with visual and textual context
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces M4U benchmark for multilingual multimodal evaluation
Covers 64 disciplines in 16 subfields across six languages
Evaluates models using cross-lingual multimodal reasoning tasks
Hongyu Wang
Institute of Computing Technology, Chinese Academy of Sciences
Jiayu Xu
Oregon State University
Cryptography
Senwei Xie
Institute of Computing Technology, Chinese Academy of Sciences
Embodied AI
Ruiping Wang
Professor, Institute of Computing Technology, Chinese Academy of Sciences
Computer Vision, Pattern Recognition, Machine Learning
Jialin Li
Institute of Computing Technology, Chinese Academy of Sciences
Zhaojie Xie
Institute of Computing Technology, Chinese Academy of Sciences
Bin Zhang
Institute of Computing Technology, Chinese Academy of Sciences
Chuyan Xiong
Institute of Computing Technology, Chinese Academy of Sciences
Xilin Chen
Institute of Computing Technology, Chinese Academy of Sciences