🤖 AI Summary
This work presents the first systematic investigation of test-time scaling (TTS) in machine translation (MT), addressing the lack of empirical validation of TTS for MT. Adopting a best-of-N sampling framework, the authors conduct comprehensive evaluations across diverse language pairs, model scales, and computational budgets on the WMT24 multilingual benchmark, leveraging both neural metrics and human evaluation. Results show that TTS substantially improves translation quality for high-resource languages; that small models with large N can outperform single forward passes of larger models, although larger models achieve higher efficiency under fixed compute budgets; and that for low-resource languages, metric–human misalignment leads to apparent performance degradation. This study fills a critical gap in understanding TTS for MT, clarifying its efficacy, limitations, and practical trade-offs, and offers novel insights for resource-efficient MT deployment.
📄 Abstract
Scaling model parameters has become the de facto strategy for improving NLP systems, but it comes with substantial computational costs. Test-Time Scaling (TTS) offers an alternative by allocating more computation at inference: generating multiple candidates and selecting the best. While effective in tasks such as mathematical reasoning, TTS has not been systematically explored for machine translation (MT). In this paper, we present the first systematic study of TTS for MT, investigating a simple but practical best-of-$N$ framework on WMT24 benchmarks. Our experiments cover six high-resource language pairs and one low-resource pair, five model sizes (3B–72B), and various TTS compute budgets ($N$ up to 1024). Our results show that a) for high-resource languages, TTS generally improves translation quality according to multiple neural MT evaluation metrics, and our human evaluation confirms these gains; b) augmenting smaller models with large $N$ can match or surpass larger models at $N{=}1$, albeit at higher compute cost; c) under fixed compute budgets, larger models are typically more efficient, and TTS can degrade quality due to metric blind spots in low-resource cases.
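The best-of-$N$ procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_generate` and `toy_score` are hypothetical stand-ins for an MT model's sampler and a neural quality metric (e.g., a COMET-style scorer), which the paper uses but which are not reproduced here.

```python
def best_of_n(source, generate, score, n):
    """Best-of-N selection: sample n candidate translations for `source`
    and return the one the metric scores highest, with its score."""
    candidates = [generate(source) for _ in range(n)]
    scores = [score(source, cand) for cand in candidates]
    best_idx = max(range(n), key=scores.__getitem__)
    return candidates[best_idx], scores[best_idx]


if __name__ == "__main__":
    # Hypothetical stand-ins: a "sampler" that cycles through canned
    # outputs, and a "metric" that just rewards longer hypotheses.
    outputs = iter(["cand-a", "cand-bbb", "cand-cc"])
    toy_generate = lambda src: next(outputs)
    toy_score = lambda src, hyp: len(hyp)

    best, s = best_of_n("Hallo Welt", toy_generate, toy_score, n=3)
    print(best, s)  # the longest candidate wins under this toy metric
```

Under a fixed compute budget, $N$ trades off directly against model size: each extra candidate costs one additional forward pass, which is the trade-off the paper's efficiency analysis quantifies.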