Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation

📅 2024-12-04
🏛️ arXiv.org
📈 Citations: 7 (3 influential)
🤖 AI Summary
To address evaluation inaccuracies in multilingual benchmarks like MMLU, which stem from cultural bias and translation distortion, this paper introduces Global MMLU, a benchmark explicitly designed for linguistic fairness and cultural adaptation. Covering 42 languages, it establishes a cross-lingual cultural knowledge audit protocol and a two-dimensional cultural sensitivity annotation framework (geographic and commonsense). Quantitative analysis reveals that 28% of MMLU questions require culturally sensitive knowledge, and that 84.9% of geography-dependent questions center on North America or Europe. Global MMLU partitions items into culturally sensitive and culturally agnostic subsets, challenging the "translation-as-adaptation" paradigm. Through multi-stage human verification by compensated professional and community annotators, the authors release a high-quality dataset that substantially corrects ranking distortions among mainstream models: state-of-the-art models exhibit up to 12.7 percentage points of performance misestimation due to unmitigated cultural bias.

📝 Abstract
Cultural biases in multilingual datasets pose significant challenges for their effectiveness as global benchmarks. These biases stem not only from differences in language but also from the cultural knowledge required to interpret questions, reducing the practical utility of translated datasets like MMLU. Furthermore, translation often introduces artefacts that can distort the meaning or clarity of questions in the target language. A common practice in multilingual evaluation is to rely on machine-translated evaluation sets, but simply translating a dataset is insufficient to address these challenges. In this work, we trace the impact of both of these issues on multilingual evaluations and ensuing model performances. Our large-scale evaluation of state-of-the-art open and proprietary models illustrates that progress on MMLU depends heavily on learning Western-centric concepts, with 28% of all questions requiring culturally sensitive knowledge. Moreover, for questions requiring geographic knowledge, an astounding 84.9% focus on either North American or European regions. Rankings of model evaluations change depending on whether they are evaluated on the full portion or the subset of questions annotated as culturally sensitive, showing the distortion to model rankings when blindly relying on translated MMLU. We release Global MMLU, an improved MMLU with evaluation coverage across 42 languages -- with improved overall quality by engaging with compensated professional and community annotators to verify translation quality while also rigorously evaluating cultural biases present in the original dataset. This comprehensive Global MMLU set also includes designated subsets labeled as culturally sensitive and culturally agnostic to allow for more holistic, complete evaluation.
Problem

Research questions and friction points this paper is trying to address.

Address cultural biases in multilingual datasets
Improve translation quality in language evaluations
Enhance MMLU for global model assessments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Engaged professional and community annotators for translation verification
Identified culturally sensitive subsets
Expanded multilingual evaluation coverage
👥 Authors
Shivalika Singh (Cohere For AI)
Angelika Romanou (EPFL): Natural Language Processing, Machine Learning, AI
Clémentine Fourrier (HuggingFace)
D. Adelani (Mila, McGill University & Canada CIFAR AI Chair)
Jian Gang Ngui (AI Singapore, National University of Singapore)
Daniel Vila-Suero (Hugging Face)
Peerat Limkonchotiwat (Research Fellow, AI Singapore, National University of Singapore): Evaluation and Benchmark, Representation Learning, Large Language Model, Multilingual Learning
Kelly Marchisio (Cohere): multilinguality, multilingual NLP, machine translation, machine learning, natural language processing
Wei Qi Leong (AI Singapore, National University of Singapore)
Yosephine Susanto (AI Singapore, National University of Singapore)
Raymond Ng (University of British Columbia): data mining, health informatics, genomics, NLP, text mining
Shayne Longpre (MIT, Stanford, Apple): Deep Learning, Natural Language Understanding
Wei-Yin Ko (Cohere)
Madeline Smith (Cohere For AI)
Antoine Bosselut (EPFL): Natural Language Processing, Machine Learning, Commonsense Representation and Reasoning
Alice Oh (KAIST Computer Science): machine learning, NLP, computational social science
André F. T. Martins (Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa)
Leshem Choshen (MIT, IBM AI research): Model Recycling, Evolving Collaborative Pretraining, Evaluation, Model Merging, Open the Black Box
Daphne Ippolito (Carnegie Mellon University): natural language processing
Enzo Ferrante (CONICET & Universidad de Buenos Aires): Medical Imaging, Machine Learning, Computer Vision, ML Fairness
Marzieh Fadaee (Staff Research Scientist, Cohere Labs): Computational Linguistics, Machine Learning, Natural Language Processing, Multilingual NLP
B. Ermiş (Cohere For AI)
Sara Hooker (Head of Cohere For AI): Machine learning efficiency, robustness, interpretability, trustworthy ML