WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing machine translation (MT) benchmarks suffer from insufficient language coverage, limiting comprehensive evaluation of multilingual capabilities. Method: This work extends the WMT24 test suite to 55 languages and dialects (adding 46 previously unrepresented varieties), spanning the literary, news, social, and speech domains. Human reference translations and professional post-edits are provided for all 55 languages, and post-edits are also supplied for eight of the nine original WMT24 languages. The methodology combines multi-source text collection, human quality assurance, and automatic evaluation with metrics such as BLEU and COMET. Results: Large language models (LLMs) outperform dedicated MT systems across all 55 languages according to automatic metrics. The benchmark is the broadest and most domain-diverse multilingual MT evaluation resource to date, establishing a new standard for assessing LLMs' multilingual translation competence.
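The automatic evaluation described above relies on metrics such as BLEU. As a hedged illustration (not the paper's exact setup, which uses standard toolkits with their own tokenization and settings), here is a minimal stdlib sketch of corpus-level BLEU: modified n-gram precisions combined by a geometric mean and scaled by a brevity penalty.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Minimal corpus-level BLEU (Papineni et al., 2002).

    Simplifying assumptions: whitespace tokenization, a single
    reference per hypothesis, and no smoothing for zero counts.
    """
    clipped = [0] * max_n   # matched n-grams, clipped by reference counts
    total = [0] * max_n     # total hypothesis n-grams
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_grams, r_grams = ngrams(h, n), ngrams(r, n)
            clipped[n - 1] += sum(min(c, r_grams[g]) for g, c in h_grams.items())
            total[n - 1] += max(len(h) - n + 1, 0)
    if hyp_len == 0 or min(total) == 0 or min(clipped) == 0:
        return 0.0
    # Geometric mean of the n-gram precisions.
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, total)) / max_n
    # Brevity penalty punishes hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(log_prec)
```

In practice, benchmark evaluations use reference implementations (e.g. the sacrebleu toolkit) rather than hand-rolled scoring, so that tokenization and smoothing are standardized across systems.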

📝 Abstract
As large language models (LLMs) become increasingly capable in languages other than English, it is important to collect benchmark datasets to evaluate their multilingual performance, including on tasks like machine translation (MT). In this work, we extend the WMT24 dataset to cover 55 languages by collecting new human-written references and post-edits for 46 new languages and dialects, in addition to post-edits of the references in 8 of the 9 languages in the original WMT24 dataset. The dataset covers four domains: literary, news, social, and speech. We benchmark a variety of MT providers and LLMs on the collected dataset using automatic metrics and find that LLMs are the best-performing MT systems in all 55 languages. These results should be confirmed using a human-based evaluation, which we leave for future work.
Problem

Research questions and friction points this paper is trying to address.

How can WMT24's language coverage be expanded far beyond its original nine languages?
How well do LLMs translate compared with dedicated MT systems across a broad set of languages?
How do MT systems perform across diverse domains (literary, news, social, speech)?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends the WMT24 test set with new human-written references and post-edits for 46 languages and dialects
Covers 55 languages and dialects across four domains (literary, news, social, speech)
Benchmarks MT providers and LLMs with automatic metrics, finding LLMs best-performing in all 55 languages