🤖 AI Summary
To address evaluation inaccuracies in multilingual benchmarks like MMLU, which stem from cultural bias and translation distortion, this paper introduces Global MMLU, a benchmark explicitly designed for translation quality and cultural fairness. Covering 42 languages, it audits the cultural knowledge MMLU assumes and annotates each question's cultural sensitivity along two dimensions (geographic and commonsense knowledge). Quantitative analysis reveals that 28% of MMLU questions require culturally sensitive knowledge, and 84.9% of geography-dependent questions center on North America or Europe. Global MMLU partitions items into culturally sensitive and culturally agnostic subsets, challenging the assumption that translation alone constitutes adaptation. Through multi-stage verification by compensated professional and community annotators, the released dataset substantially corrects ranking distortions among mainstream models: state-of-the-art models exhibit up to 12.7 percentage points of performance misestimation attributable to unmitigated cultural bias.
📝 Abstract
Cultural biases in multilingual datasets pose significant challenges for their effectiveness as global benchmarks. These biases stem not only from differences in language but also from the cultural knowledge required to interpret questions, reducing the practical utility of translated datasets like MMLU. Furthermore, translation often introduces artefacts that can distort the meaning or clarity of questions in the target language. A common practice in multilingual evaluation is to rely on machine-translated evaluation sets, but simply translating a dataset is insufficient to address these challenges. In this work, we trace the impact of both of these issues on multilingual evaluations and the resulting model performance. Our large-scale evaluation of state-of-the-art open and proprietary models illustrates that progress on MMLU depends heavily on learning Western-centric concepts, with 28% of all questions requiring culturally sensitive knowledge. Moreover, among questions requiring geographic knowledge, an astounding 84.9% focus on either North American or European regions. Model rankings change depending on whether models are evaluated on the full question set or only the subset annotated as culturally sensitive, showing the distortion introduced by blindly relying on translated MMLU. We release Global MMLU, an improved MMLU with evaluation coverage across 42 languages. We improve its overall quality by engaging compensated professional and community annotators to verify translation quality, while also rigorously evaluating cultural biases present in the original dataset. This comprehensive Global MMLU set also includes designated subsets labeled as culturally sensitive and culturally agnostic to allow for more holistic, complete evaluation.
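To make the released subsets concrete, below is a minimal sketch of how one might compare a model's accuracy on the culturally sensitive versus culturally agnostic questions. The Hugging Face dataset ID (`CohereForAI/Global-MMLU`), the language config, the `test` split, and the `cultural_sensitivity_label` and `answer` fields are assumptions about the released schema rather than confirmed details; consult the dataset card before use.

```python
# Sketch: per-subset accuracy on Global MMLU. Dataset ID, config, split,
# and field names are assumed from the paper's description of the release;
# verify them against the actual dataset card.
from collections import defaultdict

from datasets import load_dataset


def evaluate(predict, language="en"):
    """`predict` maps a question row (dict) to a choice letter like 'A'."""
    ds = load_dataset("CohereForAI/Global-MMLU", language, split="test")
    correct, total = defaultdict(int), defaultdict(int)
    for row in ds:
        # Group by the (assumed) cultural-sensitivity annotation.
        label = row.get("cultural_sensitivity_label") or "UNANNOTATED"
        total[label] += 1
        if predict(row) == row["answer"]:
            correct[label] += 1
    return {label: correct[label] / total[label] for label in total}


# Usage with a trivial baseline that always answers 'A':
# print(evaluate(lambda row: "A", language="en"))
```

Reporting accuracy per subset, rather than one aggregate number, is what surfaces the ranking distortions described above: two models with similar overall MMLU scores can diverge sharply on the culturally sensitive slice.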