Did Translation Models Get More Robust Without Anyone Even Noticing?

📅 2024-03-06
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 6
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the implicit robustness of modern multilingual machine translation models and LLM-based translators to spelling errors, abbreviations, and formatting noise. To this end, the authors conduct controlled experiments, systematically injecting diverse noise (both synthetically generated and drawn from real-world Twitter data) and comparatively analyzing source-side correction techniques. The findings are fourfold: (1) without explicit robustness training, large-scale models exhibit 30–50% smaller BLEU degradation under multiple noise types than earlier models; (2) the paper provides the first empirical evidence that this robustness arises implicitly from pretraining and architectural evolution; (3) LLM-based translators are especially resilient on social-media text; (4) the paper proposes a framework for analyzing when source-side correction is applicable, delineating the effectiveness boundaries of correction strategies across distinct noise categories. Together, these results advance understanding of implicit robustness mechanisms in contemporary translation systems and inform practical mitigation strategies for noisy inputs.
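As a concrete illustration of the controlled setup the summary describes, synthetic noise injection can be sketched as a simple perturbation function. The function name, adjacent-swap scheme, and noise rate below are illustrative assumptions, not the paper's actual noise models, which also cover abbreviations and formatting noise.

```python
import random

def inject_typos(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Simulate spelling errors by swapping adjacent letters.

    A toy stand-in for the paper's synthetic noise; the study's real
    noise types (abbreviations, casing, social-media artifacts) are richer.
    """
    rng = random.Random(seed)  # seeded for reproducible perturbations
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

clean = "translation models are surprisingly robust"
noisy = inject_typos(clean, rate=0.3)
```

Robustness is then measured as the BLEU gap between translations of `clean` and `noisy` inputs; smaller degradation indicates a more robust model.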

📝 Abstract
Neural machine translation (MT) models achieve strong results across a variety of settings, but it is widely believed that they are highly sensitive to "noisy" inputs, such as spelling errors, abbreviations, and other formatting issues. In this paper, we revisit this insight in light of recent multilingual MT models and large language models (LLMs) applied to machine translation. Somewhat surprisingly, we show through controlled experiments that these models are far more robust to many kinds of noise than previous models, even when they perform similarly on clean data. This is notable because, even though LLMs have more parameters and more complex training processes than past models, none of the open ones we consider use any techniques specifically designed to encourage robustness. Next, we show that similar trends hold for social media translation experiments: LLMs are more robust to social media text. We include an analysis of the circumstances in which source correction techniques can be used to mitigate the effects of noise. Altogether, we show that robustness to many types of noise has increased.
Problem

Research questions and friction points this paper is trying to address.

Assessing robustness of modern MT models to noisy inputs
Evaluating LLM performance on social media translation tasks
Analyzing source correction techniques for noise mitigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modern MT models show unexpected noise robustness
LLMs enhance robustness without specific training techniques
Source correction techniques mitigate noise effects in delineated circumstances
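The source-correction idea above can be made concrete as a pipeline step: normalize the noisy source before it reaches the translation model. The lexicon-lookup corrector and example abbreviations below are hypothetical illustrations of that step, not the correction techniques the paper actually evaluates.

```python
def correct_source(text: str, lexicon: dict) -> str:
    """Toy source-side correction: map known noisy tokens to canonical
    forms before the text is passed to a translation model."""
    return " ".join(lexicon.get(tok.lower(), tok) for tok in text.split())

# Hypothetical lexicon of social-media abbreviations.
lexicon = {"thx": "thanks", "u": "you", "2nite": "tonight"}
print(correct_source("thx u see u 2nite", lexicon))  # thanks you see you tonight
```

Whether such correction helps depends on the noise category: it can recover systematic abbreviations, but the paper's framework is precisely about identifying where its effectiveness ends.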
👥 Authors
Ben Peters
Instituto de Telecomunicações, Lisbon, Portugal
André F. T. Martins
Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal