🤖 AI Summary
Existing evaluation methodologies are largely confined to single-task or single-language settings and therefore cannot holistically assess the multilingual, multitask capabilities of large language models. To address this gap, we introduce P-MMEval, the first large-scale parallel multilingual multitask evaluation benchmark, covering both fundamental NLP tasks and capability-specialized assessments to enable comparable evaluation across languages, tasks, and models. The benchmark establishes a unified parallel multilingual evaluation framework, built on a standardized Hugging Face pipeline and featuring uniform prompt templates, parallel corpus alignment, and multidimensional score normalization. We systematically evaluate leading multilingual LLMs, quantify for the first time the pattern of knowledge transfer from English to other languages, and jointly analyze how task difficulty, language resource availability, model scale, and prompt design influence multilingual performance.
📝 Abstract
Recent advancements in large language models (LLMs) showcase varied multilingual capabilities across tasks such as translation, code generation, and reasoning. Previous assessments often limited their scope to fundamental natural language processing (NLP) tasks or isolated capability-specific tasks. To alleviate this drawback, we aim to present a comprehensive multilingual multitask benchmark. First, we introduce P-MMEval, a large-scale benchmark covering both fundamental and capability-specialized datasets. Furthermore, P-MMEval delivers consistent language coverage across its datasets and provides parallel samples. Finally, we conduct extensive experiments on representative multilingual model series to compare performance across models and tasks, explore the relationship between multilingual performance and factors such as task, model size, language, and prompt, and examine the effectiveness of knowledge transfer from English to other languages. The resulting insights are intended to offer valuable guidance for future research. The dataset is available at https://huggingface.co/datasets/Qwen/P-MMEval.