🤖 AI Summary
Existing multilingual multimodal benchmarks fail to discriminate model capabilities effectively: purely language-based models achieve high scores, undermining evaluation of cross-lingual vision–language joint reasoning. To address this, we introduce M4U, a rigorous benchmark for multilingual multimodal understanding covering 64 disciplines across 16 subfields in six languages, with 10,000 high-quality samples. Methodologically, M4U combines multilingual multimodal prompt engineering, cross-lingual consistency evaluation, discipline-balanced sampling, and vision–text semantic alignment verification. Key findings: GPT-4o achieves only 47.6% average accuracy, and all mainstream multimodal large language models exhibit significant language bias, with cross-lingual joint reasoning performance dropping by up to 23.5%. M4U systematically exposes fundamental failures in multidisciplinary, multilingual, and multimodal collaborative reasoning while providing a fine-grained framework for analyzing language preferences, thereby overcoming critical limitations of prior benchmarks.
📝 Abstract
Multilingual capability is essential for large multimodal models, since they are usually deployed across many countries and languages. However, most existing benchmarks for multilingual multimodal reasoning struggle to differentiate between models of varying performance; even language models without visual capabilities can easily achieve high scores. This leaves a comprehensive evaluation of leading multilingual multimodal models largely unexplored. In this work, we introduce M4U, a novel and challenging benchmark for assessing multi-discipline multilingual multimodal understanding and reasoning. M4U contains 10k samples covering 64 disciplines across 16 subfields in Science, Engineering, and Healthcare, in six languages. Using M4U, we conduct extensive evaluations of leading Large Multimodal Models (LMMs) and Large Language Models (LLMs) augmented with external tools. The results show that the state-of-the-art model, GPT-4o, achieves only 47.6% average accuracy on M4U. We also observe that leading LMMs exhibit significant language preferences. Our in-depth analysis indicates that leading LMMs, including GPT-4o, struggle to reason over multilingual information present in both visual and textual context; in particular, their performance degrades when prompted with cross-lingual multimodal questions. Our code and dataset are publicly available.
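The kind of per-language analysis described above (average accuracy per language and the degradation relative to a reference language) can be sketched as follows. This is a minimal illustration, not the actual M4U evaluation code; the record format and field names (`language`, `correct`) are assumptions for the example.

```python
# Hypothetical sketch of per-language accuracy and cross-lingual
# degradation analysis. The record schema is illustrative, not M4U's.
from collections import defaultdict

def language_accuracy(results):
    """Average accuracy per language.

    results: iterable of dicts with 'language' and 'correct' keys.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for r in results:
        totals[r["language"]] += 1
        hits[r["language"]] += int(r["correct"])
    return {lang: hits[lang] / totals[lang] for lang in totals}

def cross_lingual_drop(acc, reference="en"):
    """Accuracy drop (in percentage points) of each language vs. a reference."""
    base = acc[reference]
    return {lang: round(100 * (base - a), 1) for lang, a in acc.items()}

# Toy results: two English and two German questions.
results = [
    {"language": "en", "correct": True},
    {"language": "en", "correct": True},
    {"language": "de", "correct": True},
    {"language": "de", "correct": False},
]
acc = language_accuracy(results)       # {'en': 1.0, 'de': 0.5}
drop = cross_lingual_drop(acc)         # {'en': 0.0, 'de': 50.0}
```

A real evaluation would additionally separate the language of the question text from the language embedded in the image, so that cross-lingual (text/image language mismatch) cases can be scored apart from monolingual ones.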