MEXA: Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 7
Influential: 1
📄 PDF
🤖 AI Summary
Current multilingual evaluations of English-centric large language models (LLMs) suffer from limited language coverage and narrow task diversity, lacking systematic assessment. Method: We propose MEXA, a zero-shot, fine-tuning-free framework for evaluating multilingual capability. It quantifies cross-lingual alignment between English and non-English representations in intermediate transformer layers, using parallel corpora (e.g., FLORES-200, the Bible) and exploring several strategies for extracting sentence embeddings from decoder-only architectures. Contribution/Results: Evaluated against Belebele, m-MMLU, and m-ARC on the Llama, Gemma, Mistral, and OLMo model families, MEXA's predicted scores achieve an average Pearson correlation of 0.90 with actual downstream task performance across languages. It requires no language-specific annotations or supervised adaptation, offering a scalable, interpretable, and architecture-agnostic paradigm for multilingual LLM evaluation.

📝 Abstract
English-centric large language models (LLMs) often show strong multilingual capabilities. However, their multilingual performance remains unclear and is under-evaluated for many other languages. Most benchmarks for multilinguality focus on classic NLP tasks or cover a minimal number of languages. We introduce MEXA, a method for assessing the multilingual capabilities of pre-trained English-centric LLMs using parallel sentences, which are available for more languages than existing downstream tasks. MEXA leverages the fact that English-centric LLMs use English as a pivot language in their intermediate layers. MEXA computes the alignment between English and non-English languages using parallel sentences to evaluate the transfer of language understanding from English to other languages. This alignment can be used to estimate model performance in different languages. We conduct controlled experiments using various parallel datasets (FLORES-200 and Bible), models (Llama family, Gemma family, Mistral, and OLMo), and established downstream tasks (Belebele, m-MMLU, and m-ARC). We explore different methods to compute embeddings in decoder-only models. Our results show that MEXA, in its default settings, achieves an average Pearson correlation of 0.90 between its predicted scores and actual task performance across languages. This suggests that MEXA is a reliable method for estimating the multilingual capabilities of English-centric LLMs, providing a clearer understanding of their multilingual potential and the inner workings of LLMs. Leaderboard: https://cis-lmu-mexa.hf.space, Code: https://github.com/cisnlp/MEXA.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multilingual performance of English-centric LLMs
Assessing cross-lingual alignment using parallel sentences
Estimating model capabilities across diverse languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses parallel sentences for multilingual evaluation
Leverages English as pivot in LLM layers
Computes alignment for transfer estimation
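The alignment idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' released code): given mean-pooled intermediate-layer embeddings for parallel English and non-English sentences, it scores a pair as aligned when the true translation is the mutual nearest neighbor under cosine similarity, and returns the fraction of aligned pairs.

```python
import numpy as np

def alignment_score(en_embs: np.ndarray, xx_embs: np.ndarray) -> float:
    """MEXA-style alignment sketch: fraction of parallel pairs whose
    English embedding and translation embedding are mutual nearest
    neighbors among all candidate sentences.

    en_embs, xx_embs: (n, d) arrays, row i of each is a parallel pair.
    """
    # Normalize rows so dot products become cosine similarities.
    en = en_embs / np.linalg.norm(en_embs, axis=1, keepdims=True)
    xx = xx_embs / np.linalg.norm(xx_embs, axis=1, keepdims=True)
    sim = en @ xx.T                      # (n, n) cosine similarity matrix
    diag = np.diag(sim)                  # similarity of true pairs
    # A pair counts as aligned if the true translation is the argmax
    # both row-wise (en -> xx) and column-wise (xx -> en).
    row_hit = diag >= sim.max(axis=1)
    col_hit = diag >= sim.max(axis=0)
    return float(np.mean(row_hit & col_hit))
```

In the paper's setup, such per-language scores (computed from embeddings at each intermediate layer) are then correlated with downstream task accuracy; the reported 0.90 Pearson correlation is against task performance, not produced by this sketch.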