Languages Still Left Behind: Toward a Better Multilingual Machine Translation Benchmark

📅 2025-08-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies three critical flaws in mainstream multilingual machine translation benchmarks such as FLORES+: (1) reported translation quality does not meet claimed standards; (2) source texts are narrowly scoped and culturally biased toward English-speaking contexts; and (3) evaluation protocols are vulnerable to heuristic “cheating” (e.g., named entity copying). To address these issues, we introduce a new evaluation suite—domain-general, culturally neutral, and minimally reliant on named entities—and propose a systematic diagnostic framework integrating human evaluation, cross-lingual comparative analysis, and natural corpus testing. Empirical results demonstrate that models trained on high-quality natural data underperform on FLORES+ yet achieve substantial gains on our benchmark, revealing a severe misalignment between existing benchmarks and real-world translation challenges. This study provides both theoretical grounding and practical methodology for reconstructing multilingual MT evaluation paradigms.

📝 Abstract
Multilingual machine translation (MT) benchmarks play a central role in evaluating the capabilities of modern MT systems. Among them, the FLORES+ benchmark is widely used, offering English-to-many translation data for over 200 languages, curated with strict quality control protocols. However, we study data in four languages (Asante Twi, Japanese, Jinghpaw, and South Azerbaijani) and uncover critical shortcomings in the benchmark's suitability for truly multilingual evaluation. Human assessments reveal that many translations fall below the claimed 90% quality standard, and annotators report that source sentences are often too domain-specific and culturally biased toward the English-speaking world. We further demonstrate that simple heuristics, such as copying named entities, can yield non-trivial BLEU scores, suggesting vulnerabilities in the evaluation protocol. Notably, we show that MT models trained on high-quality, naturalistic data perform poorly on FLORES+ while achieving significant gains on our domain-relevant evaluation set. Based on these findings, we advocate for multilingual MT benchmarks that use domain-general, culturally neutral source texts and rely less on named entities, in order to better reflect real-world translation challenges.
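To make the entity-copying vulnerability concrete, here is a minimal sketch (not the paper's code) of such a baseline scored with sacrebleu. The sentence pair and the capitalization-based entity proxy are illustrative assumptions; the paper only reports that this family of heuristics earns non-trivial BLEU.

```python
# Minimal sketch of the entity-copying heuristic described in the abstract:
# output only the named entities copied verbatim from the English source,
# then score against the reference with BLEU. Not the authors' code; the toy
# sentence pair and the capitalization-based entity proxy are assumptions.
import re
import sacrebleu  # pip install sacrebleu

def copy_named_entities(source: str) -> str:
    """Crude NER proxy: keep non-initial tokens that begin with a capital."""
    tokens = source.split()
    kept = [t.strip(".,;:'\"") for i, t in enumerate(tokens)
            if i > 0 and re.match(r"[A-Z]", t)]
    return " ".join(kept)

# Illustrative pair: references in many languages keep Latin-script entities
# verbatim, so the copied tokens overlap with the reference n-grams.
sources = ["On Monday, researchers at Stanford University reported a new result."]
references = ["Bazar ertəsi Stanford University tədqiqatçıları yeni nəticə bildirdi."]

hypotheses = [copy_named_entities(s) for s in sources]
score = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"Entity-copying baseline BLEU: {score.score:.1f}")
```

A real replication would use a proper NER model and the actual FLORES+ references, but even this crude proxy collects n-gram credit wherever references retain entities verbatim.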
Problem

Research questions and friction points this paper is trying to address.

Assessing translation quality shortcomings in the FLORES+ benchmark
Identifying cultural bias and excessive domain specificity in its source texts
Proposing improved benchmarks for realistic multilingual evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain-general, culturally neutral source texts
Reduced reliance on named entities (a screening sketch follows this list)
Evaluation that reflects real-world translation challenges
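One way the reduced-reliance-on-named-entities criterion could be operationalized when curating source texts is to screen candidates by named-entity density. The spaCy model, the example sentences, and the 10% threshold below are illustrative assumptions, not the paper's method.

```python
# Hypothetical screening step for benchmark curation: keep only candidate
# source sentences whose tokens are rarely covered by named-entity spans.
# Illustrative sketch; the model choice and threshold are assumptions.
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def entity_density(sentence: str) -> float:
    """Fraction of a sentence's tokens covered by named-entity spans."""
    doc = nlp(sentence)
    entity_tokens = sum(len(ent) for ent in doc.ents)
    return entity_tokens / max(len(doc), 1)

candidates = [
    "The children walked to the river to fetch water.",      # entity-free
    "Serena Williams won Wimbledon in July 2016 in London.",  # entity-heavy
]
# Keep sentences where entities cover under 10% of tokens (threshold assumed).
selected = [s for s in candidates if entity_density(s) < 0.10]
print(selected)  # -> ["The children walked to the river to fetch water."]
```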