Kaleidoscope: In-language Exams for Massively Multilingual Vision Evaluation

📅 2025-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing VLM evaluations rely heavily on English-centric benchmarks or machine-translated multilingual data, causing cultural distortion and evaluation gaps for low-resource languages. This work introduces Kaleidoscope, a large-scale, purely in-language multilingual vision-language benchmark covering 18 languages, 14 academic disciplines, and 20,911 culturally grounded multiple-choice questions. It employs a native-language construction paradigm, cross-disciplinary question design, and validation by a global pool of multilingual experts to ensure linguistic authenticity and cultural fidelity. Kaleidoscope enables unified assessment of visual understanding, language reasoning, and cultural commonsense, and supports zero-shot cross-lingual transfer analysis. Experiments systematically show that state-of-the-art multilingual VLMs exhibit a 32.7% accuracy drop on low-resource versus high-resource languages, with error rates rising by 41% on culture-sensitive items. The result is a reproducible, culturally inclusive multimodal evaluation benchmark and diagnostic toolkit.

📝 Abstract
The evaluation of vision-language models (VLMs) has mainly relied on English-language benchmarks, leaving significant gaps in both multilingual and multicultural coverage. While multilingual benchmarks have expanded in both size and language coverage, many rely on translations of English datasets and fail to capture cultural nuances. In this work, we propose Kaleidoscope, the most comprehensive exam benchmark to date for the multilingual evaluation of vision-language models. Kaleidoscope is a large-scale, in-language multimodal benchmark designed to evaluate VLMs across diverse languages and visual inputs. Kaleidoscope covers 18 languages and 14 different subjects, amounting to a total of 20,911 multiple-choice questions. Built through an open science collaboration with a diverse group of researchers worldwide, Kaleidoscope ensures linguistic and cultural authenticity. We evaluate top-performing multilingual vision-language models and find that they perform poorly on low-resource languages and in complex multimodal scenarios. Our results highlight the need for progress on culturally inclusive multimodal evaluation frameworks.
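The headline finding, a large accuracy gap between high- and low-resource languages, reduces to straightforward per-language bookkeeping over multiple-choice predictions. The sketch below is illustrative only: the record fields (`language`, `predicted`, `answer`) are assumed names, not Kaleidoscope's actual schema.

```python
from collections import defaultdict

# Hypothetical model outputs on a multiple-choice benchmark. Each record
# carries a language tag, the model's chosen option, and the gold answer.
predictions = [
    {"language": "en", "predicted": "B", "answer": "B"},
    {"language": "en", "predicted": "C", "answer": "B"},
    {"language": "bn", "predicted": "A", "answer": "D"},
    {"language": "bn", "predicted": "D", "answer": "D"},
]

def per_language_accuracy(records):
    """Return {language: accuracy} over multiple-choice predictions."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["language"]] += 1
        correct[r["language"]] += int(r["predicted"] == r["answer"])
    return {lang: correct[lang] / total[lang] for lang in total}

print(per_language_accuracy(predictions))  # {'en': 0.5, 'bn': 0.5}
```

Comparing the resulting accuracies across language-resource tiers gives the kind of gap the paper reports.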
Problem

Research questions and friction points this paper is trying to address.

Addressing gaps in multilingual and multicultural vision-language model evaluation
Providing a culturally authentic multimodal benchmark for diverse languages
Highlighting poor VLM performance on low-resource languages and in complex multimodal scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale, in-language multimodal benchmark for VLMs
Covers 18 languages and 14 diverse subjects with 20,911 multiple-choice questions
Ensures linguistic and cultural authenticity through a global open-science collaboration
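An in-language, multimodal multiple-choice item like those described above can be pictured as a small record plus a prompt renderer. This is a minimal sketch with assumed field names (`question`, `options`, `answer_index`, `image`), not the benchmark's actual data format.

```python
# Illustrative in-language multiple-choice item (fields are assumptions,
# not Kaleidoscope's real schema). The question stays in its source
# language rather than being translated from English.
question = {
    "language": "es",
    "subject": "biology",
    "question": "¿Qué estructura celular realiza la fotosíntesis?",
    "options": ["Mitocondria", "Cloroplasto", "Ribosoma", "Núcleo"],
    "answer_index": 1,
    "image": None,  # path to the exam figure when the item is multimodal
}

def format_prompt(item):
    """Render one item as an A/B/C/D prompt in the item's own language."""
    letters = "ABCD"
    lines = [item["question"]]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(item["options"])]
    return "\n".join(lines)

print(format_prompt(question))
```

Keeping questions in their source language, rather than translating English items, is what preserves the cultural grounding the benchmark is built around.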
👥 Authors
Israfel Salazar · Department of Computer Science, University of Copenhagen
Manuel Fernández Burda · Institute of Computer Sciences, CONICET & Universidad de Buenos Aires
Shayekh Bin Islam · Cohere For AI Community
Arshia Soltani Moakhar · University of Maryland (Theory of Robustness, Interpretability)
Shivalika Singh · Cohere For AI
Fabian Farestam · ETH Zürich (games on graphs, LLM evaluations)
Angelika Romanou · EPFL (Natural Language Processing, Machine Learning, AI)
Danylo Boiko · Cohere For AI Community, Taras Shevchenko National University of Kyiv
Dipika Khullar · Cohere For AI Community
Mike Zhang · Aalborg University (Copenhagen) (Artificial Intelligence, Natural Language Processing, Information Extraction, NLP Applications)
Dominik Krzemiński · Cohere For AI Community
Jekaterina Novikova · Vanguard Group (Natural Language Processing, Trustworthy AI, Machine Learning for Health)
Luísa Shimabucoro · University of São Paulo
Joseph Marvin Imperial · National University Philippines
Rishabh Maheshwary · Applied Scientist, ServiceNow (Machine Learning, Deep Learning, Natural Language Processing)
Sharad Duwal · Unknown affiliation (graph representation learning, soft error reliability, ML interpretability)
Alfonso Amayuelas · University of California, Santa Barbara (Artificial Intelligence, Natural Language Processing, Machine Learning, Large Language Models)
Swati Rajwal · Emory University
Jebish Purbey · Cohere For AI Community, M2ai.in
Ahmed Ruby · Uppsala University
Marek Suppa · Cisco, Comenius University in Bratislava
Azmine Toushik Wasi · Shahjalal University of Science and Technology (Machine Learning, AI Agents & Reasoning, Health Informatics, Graph Neural Networks, HCI-HAI & Safety)
Ram Mohan Rao Kadiyala · M2ai.in, Traversaal.ai
Olga Tsymboi · T-Tech
Maksim Kostritsya · HSE University (Higher School of Economics), RAFT
Bardia Soltani Moakhar · Cohere For AI Community
Gabriel da Costa Merlin · University of São Paulo
Otávio Ferracioli Coletti · University of São Paulo
Maral Jabbari Shiviari · Cohere For AI Community
MohammadAmin Farahani Fard · Cohere For AI Community
Silvia Fernandez · Cohere For AI Community
María Grandury · SomosNLP / Polytechnical University of Madrid (Natural Language Processing, LLM Evaluation)
Dmitry Abulkhanov · Huawei Noah's Ark Lab (Computer Science)
Drishti Sharma · Cohere For AI Community, M2ai.in
Andre Guarnier De Mitri · University of São Paulo
Leticia Bossatto Marchezi · Federal University of São Carlos
Johan Obando-Ceron · Mila, University of Montreal (Deep Learning, Reinforcement Learning, Machine Learning, Artificial Intelligence)
Nazar Kohut · Lviv Polytechnic National University
Beyza Ermis · Cohere For AI
Desmond Elliott · Associate Professor, University of Copenhagen (Natural Language Processing, Vision-Language, Tokenization-free Language Models)
Enzo Ferrante · CONICET & Universidad de Buenos Aires (Medical Imaging, Machine Learning, Computer Vision, ML Fairness)
Sara Hooker · Head of Cohere For AI (machine learning efficiency, robustness, interpretability, trustworthy ML)
Marzieh Fadaee · Staff Research Scientist, Cohere Labs (Computational Linguistics, Machine Learning, Natural Language Processing, Multilingual NLP)