🤖 AI Summary
Existing VLM evaluations rely heavily on English-centric benchmarks or machine-translated multilingual data, which distorts cultural content and leaves evaluation gaps for low-resource languages. This work introduces Kaleidoscope, the most comprehensive in-language multilingual vision-language exam benchmark to date, covering 18 languages, 14 academic subjects, and 20,911 culturally grounded multiple-choice questions. Rather than translating an English dataset, the benchmark is built through an open science collaboration with researchers worldwide, preserving linguistic authenticity and cultural fidelity. Kaleidoscope enables joint assessment of visual understanding, language reasoning, and culturally situated knowledge. Evaluations of top-performing multilingual VLMs show that they perform poorly on low-resource languages and in complex multimodal scenarios, underscoring the need for culturally inclusive multimodal evaluation frameworks.
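To make the benchmark's structure concrete, here is a minimal sketch of what one exam item might look like as a record. The `ExamItem` class and all field names are hypothetical illustrations, not the paper's actual data schema; they simply mirror the attributes the summary describes (language, subject, multiple-choice options, and an optional image).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExamItem:
    """One multiple-choice exam question (hypothetical schema)."""
    question: str                     # question text in the source language
    options: list[str]                # answer choices
    answer_index: int                 # index of the correct option
    language: str                     # e.g. "bn" for Bengali
    subject: str                      # one of the 14 academic subjects
    image_path: Optional[str] = None  # attached figure, if the item is multimodal
```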
📝 Abstract
The evaluation of vision-language models (VLMs) has mainly relied on English-language benchmarks, leaving significant gaps in both multilingual and multicultural coverage. While multilingual benchmarks have expanded in both size and language coverage, many rely on translations of English datasets and fail to capture cultural nuances. In this work, we propose Kaleidoscope, the most comprehensive exam benchmark to date for the multilingual evaluation of vision-language models. Kaleidoscope is a large-scale, in-language multimodal benchmark designed to evaluate VLMs across diverse languages and visual inputs. It covers 18 languages and 14 different subjects, for a total of 20,911 multiple-choice questions. Built through an open science collaboration with a diverse group of researchers worldwide, Kaleidoscope ensures linguistic and cultural authenticity. We evaluate top-performing multilingual vision-language models and find that they perform poorly on low-resource languages and in complex multimodal scenarios. Our results highlight the need for progress on culturally inclusive multimodal evaluation frameworks.
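Since the headline finding is a per-language performance gap on multiple-choice items, the scoring protocol can be sketched in a few lines. This is a minimal illustration, not the paper's released evaluation code: `per_language_accuracy`, `predict`, and the `ExamItem` records from the sketch above are all assumed names.

```python
from collections import defaultdict

def per_language_accuracy(items, predict):
    """Score multiple-choice predictions, grouped by language.

    `items` is an iterable of ExamItem records; `predict` is any
    callable mapping an item to the index of the model's chosen option.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        total[item.language] += 1
        if predict(item) == item.answer_index:
            correct[item.language] += 1
    return {lang: correct[lang] / total[lang] for lang in total}

# Usage: scores = per_language_accuracy(dataset, model_predict)
# Comparing the resulting per-language accuracies is what surfaces the
# gap between high- and low-resource languages reported above.
```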