🤖 AI Summary
Existing machine translation (MT) benchmarks quickly become obsolete because their test samples are too easy, failing to discriminate between model capabilities or to expose weaknesses; conventional hard-example construction, via sub-sampling or synthetic generation, compromises naturalness or linguistic diversity. This paper introduces MT-breaker, an iterative optimization framework that jointly leverages large language models (LLMs) and target MT systems. Through prompting and translation feedback, it dynamically rewrites source texts while preserving semantic fidelity and linguistic naturalness, substantially increasing translation difficulty. MT-breaker supports model-specific hard-example generation, and the resulting difficulty transfers across models and languages. Experiments show that the generated test cases uncover systematic deficiencies in state-of-the-art MT systems, providing challenging, high-quality data for robustness evaluation and continuous model improvement.
📝 Abstract
Machine translation benchmarks sourced from the real world quickly become obsolete, because most of their examples are easy for state-of-the-art translation models. This limits a benchmark's ability to distinguish which model is better or to reveal models' weaknesses. Current methods for creating difficult test cases, such as subsampling or from-scratch synthesis, either fall short of identifying difficult examples or suffer from a lack of diversity and naturalness. Inspired by the iterative process of human experts probing for model failures, we propose MT-breaker, a method in which a large language model iteratively refines a source text to increase its translation difficulty. The LLM iteratively queries a target machine translation model to guide its generation of difficult examples. Our approach generates examples that are more challenging for the target MT model while preserving the diversity of natural texts. Although the examples are tailored to a particular machine translation model during generation, the difficulty also transfers to other models and languages.
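The feedback loop described in the abstract (LLM rewrites a source text, queries the target MT model, and keeps rewrites that are harder yet still faithful) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names `llm_rewrite`, `mt_translate`, `quality_score`, and `meaning_preserved` are hypothetical stand-ins, stubbed here so the loop runs; a real setup would call an LLM, the target MT system, and a quality-estimation metric.

```python
def llm_rewrite(source: str, feedback: str) -> str:
    # Stub: an LLM would rewrite `source`, guided by `feedback` (the target
    # model's current translation). Here we only append a marker.
    return source + " (rewritten)"

def mt_translate(source: str) -> str:
    # Stub: the target MT model under test.
    return source.upper()

def quality_score(source: str, translation: str) -> float:
    # Stub: a reference-free quality estimate; lower = harder for the model.
    # A real setup might use an LLM judge or a QE metric.
    return 1.0 / (1.0 + len(source))

def meaning_preserved(original: str, rewrite: str) -> bool:
    # Stub: semantic-fidelity check between original and rewritten source.
    return original.split()[0] in rewrite  # placeholder heuristic

def mt_breaker(source: str, n_iters: int = 3) -> str:
    """Iteratively rewrite `source` so the target MT model's output degrades,
    while the rewrite stays semantically faithful to the original."""
    best = source
    best_score = quality_score(best, mt_translate(best))
    for _ in range(n_iters):
        feedback = mt_translate(best)  # show the LLM the current translation
        candidate = llm_rewrite(best, feedback)
        score = quality_score(candidate, mt_translate(candidate))
        # Accept only rewrites that are harder yet still meaning-preserving.
        if score < best_score and meaning_preserved(source, candidate):
            best, best_score = candidate, score
    return best
```

With the stubs above, each iteration produces a longer (and, per the toy score, "harder") source; in the real method, the acceptance test is what keeps difficulty increasing without sacrificing naturalness or fidelity.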