Evaluating Language Translation Models by Playing Telephone

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current machine translation (MT) evaluation lags behind model advancements, which particularly hinders optimization for complex tasks such as long-document and literary translation. To address this, we propose an unsupervised evaluation-data generation method that combines multi-model round-robin translation with source-target cycle translation (the "telephone game") to automatically construct high-quality, cross-domain, multi-length test samples with no reliance on human reference translations. Evaluation systems trained on this data outperform a state-of-the-art baseline (xCOMET) on both translation quality scoring and candidate selection, improving assessment accuracy and generalizability. Our key contribution is the first integration of cycle translation with model diversity, establishing a scalable, low-cost, and highly adaptable unsupervised MT evaluation paradigm.
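
A minimal sketch of the round-robin "telephone game" loop described above, assuming a generic `translate(model, text, src, tgt)` callable as a stand-in for whatever MT backends are used; the function names, parameters, and model-rotation scheme are illustrative, not the authors' code:

```python
from typing import Callable, Dict, List

def telephone_game(
    source_text: str,
    models: List[str],
    translate: Callable[[str, str, str, str], str],
    src_lang: str = "en",
    tgt_lang: str = "de",
    rounds: int = 4,
) -> List[Dict[str, object]]:
    """Cycle a text source -> target -> source repeatedly, rotating through
    a pool of MT models round-robin, recording each generation as a sample.

    Later generations drift further from the original, yielding graded
    quality levels without any human reference translations.
    """
    samples: List[Dict[str, object]] = []
    current = source_text
    for round_idx in range(rounds):
        # Rotate models so consecutive hops use different systems.
        fwd = models[(2 * round_idx) % len(models)]
        back = models[(2 * round_idx + 1) % len(models)]

        target_side = translate(fwd, current, src_lang, tgt_lang)   # src -> tgt
        current = translate(back, target_side, tgt_lang, src_lang)  # tgt -> src

        samples.append({
            "source": source_text,        # the original document
            "translation": target_side,   # target-language hop this round
            "round_trip": current,        # back in the source language
            "generation": round_idx + 1,  # proxy for degradation level
        })
    return samples
```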

📝 Abstract
Our ability to efficiently and accurately evaluate the quality of machine translation systems has been outrun by the effectiveness of current language models, which limits the potential for further improving these models on more challenging tasks like long-form and literary translation. We propose an unsupervised method to generate training data for translation evaluation over different document lengths and application domains by repeated rounds of translation between source and target languages. We evaluate evaluation systems trained on texts mechanically generated using both model rotation and language translation approaches, demonstrating improved performance over a popular translation evaluation system (xCOMET) on two different tasks: (i) scoring the quality of a given translation against a human reference and (ii) selecting which of two translations is generationally closer to an original source document.
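
Task (ii) from the abstract reduces to checking how often a metric prefers the candidate from an earlier telephone-game generation. A hedged sketch, assuming a hypothetical `score(source, translation)` metric and pairs pre-labeled by generation depth (neither is specified by the source):

```python
from typing import Callable, Dict, List

def pairwise_selection_accuracy(
    pairs: List[Dict[str, str]],
    score: Callable[[str, str], float],
) -> float:
    """Fraction of pairs where the metric prefers the candidate that is
    generationally closer to the original source document.

    Each pair holds a `source` text plus `closer` (earlier generation)
    and `farther` (later generation) translations.
    """
    if not pairs:
        return 0.0
    correct = sum(
        score(p["source"], p["closer"]) > score(p["source"], p["farther"])
        for p in pairs
    )
    return correct / len(pairs)
```
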
Problem

Research questions and friction points this paper is trying to address.

Evaluating machine translation quality efficiently and accurately
Generating training data for translation evaluation across domains
Improving performance on scoring and selection tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised method generates translation evaluation data
Repeated translation rounds between languages create training texts
Outperforms xCOMET on quality scoring and selection tasks
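
For context, the xCOMET baseline named above is available through the open-source `unbabel-comet` package. The snippet below is a usage sketch of that metric as documented for recent package versions (the checkpoint is gated and may require a Hugging Face login); it is not the paper's evaluation pipeline:

```python
# pip install unbabel-comet
from comet import download_model, load_from_checkpoint

# Download and load the xCOMET checkpoint (gated on Hugging Face;
# accepting the model license may be required).
model_path = download_model("Unbabel/XCOMET-XL")
model = load_from_checkpoint(model_path)

# Each sample needs a source ("src") and translation ("mt"); a human
# reference ("ref") is optional for xCOMET but can improve scoring.
data = [
    {
        "src": "Der Bericht wurde gestern veröffentlicht.",
        "mt": "The report was published yesterday.",
        "ref": "The report came out yesterday.",
    }
]

output = model.predict(data, batch_size=8, gpus=0)  # gpus=1 with CUDA
print(output.scores)        # per-sample quality scores in [0, 1]
print(output.system_score)  # corpus-level average
```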