Investigating Test-Time Scaling with Reranking for Machine Translation

📅 2025-09-23
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work presents the first systematic investigation of test-time scaling (TTS) in machine translation (MT), addressing the lack of empirical validation of TTS for MT. Adopting a best-of-N sampling framework, the authors conduct comprehensive evaluations across diverse language pairs, model scales, and computational budgets on the WMT24 multilingual benchmark, leveraging both neural metrics and human evaluation. Results show that TTS substantially improves translation quality for high-resource languages; small models with large N can outperform single forward passes of larger models, yet larger models achieve higher efficiency under fixed compute budgets; and for low-resource languages, metric–human misalignment leads to apparent performance degradation. This study fills a critical gap in understanding TTS for MT, clarifying its efficacy, limitations, and practical trade-offs, and offers novel insights for resource-efficient MT deployment.

📝 Abstract
Scaling model parameters has become the de facto strategy for improving NLP systems, but it comes with substantial computational costs. Test-Time Scaling (TTS) offers an alternative by allocating more computation at inference: generating multiple candidates and selecting the best. While effective in tasks such as mathematical reasoning, TTS has not been systematically explored for machine translation (MT). In this paper, we present the first systematic study of TTS for MT, investigating a simple but practical best-of-N framework on WMT24 benchmarks. Our experiments cover six high-resource language pairs and one low-resource pair, five model sizes (3B-72B), and various TTS compute budgets (N up to 1024). Our results show that a) for high-resource languages, TTS generally improves translation quality according to multiple neural MT evaluation metrics, and our human evaluation confirms these gains; b) augmenting smaller models with large $N$ can match or surpass larger models at $N{=}1$, albeit at higher compute cost; c) under fixed compute budgets, larger models are typically more efficient, and in low-resource cases TTS can degrade quality due to metric blind spots.
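The best-of-N procedure described in the abstract can be sketched in a few lines: sample N candidate translations from a model, score each with a quality metric, and keep the top-scoring one. This is a minimal illustration, not the paper's implementation; `sample_fn` and `score_fn` are hypothetical stand-ins for an LLM sampler and a neural MT metric (e.g. a quality-estimation model scoring source-candidate pairs).

```python
import random


def best_of_n(source, n, sample_fn, score_fn):
    """Best-of-N reranking: sample n candidates for `source`,
    then return the candidate with the highest metric score."""
    candidates = [sample_fn(source) for _ in range(n)]
    return max(candidates, key=lambda cand: score_fn(source, cand))


if __name__ == "__main__":
    # Toy demo with a random sampler over a fixed pool and a
    # length-based scorer; both are placeholders for real components.
    random.seed(0)
    pool = ["guten Tag", "hallo", "guten Tag zusammen"]
    sample = lambda src: random.choice(pool)
    score = lambda src, cand: len(cand)
    print(best_of_n("good day", 8, sample, score))
```

Increasing N raises the chance that at least one sampled candidate scores well, which is why quality tends to improve with larger N, while cost grows linearly in N.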
Problem

Research questions and friction points this paper is trying to address.

Whether and when Test-Time Scaling improves machine translation quality
How best-of-N candidate reranking behaves across diverse language pairs and model scales
How translation-quality gains trade off against additional inference compute
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic investigation of Test-Time Scaling for MT
Simple, practical best-of-N reranking framework with neural-metric candidate selection
Evaluation across multiple language pairs, model sizes (3B-72B), and compute budgets (N up to 1024)
Shaomu Tan
PhD Candidate at University of Amsterdam
Machine Translation · Multilingual NLP · LLMs
Ryosuke Mitani
Sony Group Corporation
Ritvik Choudhary
Sony Group Corporation
Toshiyuki Sekiya
Sony Group Corporation