🤖 AI Summary
To address critical challenges in massively multilingual machine translation (MMT), namely imbalanced language coverage, poor performance on low-resource languages, and English-centric bias, this paper introduces LMT, a family of large-scale MMT models centered on both Chinese and English, supporting 60 languages across 234 translation directions. The authors first identify and characterize the "directional degeneration" phenomenon in multilingual training, then propose Strategic Downsampling and Parallel Multilingual Prompting (PMP) to mitigate it and strengthen cross-lingual transfer. They further design fine-grained adaptation strategies for efficient multilingual fine-tuning. Experiments show that LMT achieves state-of-the-art (SOTA) performance under comparable language coverage: its 4B-parameter variant outperforms the much larger Aya-101-13B and NLLB-54B models across diverse benchmarks. The LMT models are fully open-sourced in multiple parameter scales for flexible deployment.
📝 Abstract
Large language models have significantly advanced Multilingual Machine Translation (MMT), yet broad language coverage, consistent translation quality, and English-centric bias remain open challenges. To address these challenges, we introduce **LMT**, a suite of **L**arge-scale **M**ultilingual **T**ranslation models centered on both Chinese and English, covering 60 languages and 234 translation directions. During development, we identify a previously overlooked phenomenon of **directional degeneration**, in which symmetric multi-way fine-tuning data overemphasize reverse directions (X → En/Zh), leading to excessive many-to-one mappings and degraded translation quality. We propose **Strategic Downsampling**, a simple yet effective method to mitigate this degeneration. In addition, we design **Parallel Multilingual Prompting (PMP)**, which leverages typologically related auxiliary languages to enhance cross-lingual transfer. Through rigorous data curation and refined adaptation strategies, LMT achieves SOTA performance among models of comparable language coverage, with our 4B model (LMT-60-4B) surpassing the much larger Aya-101-13B and NLLB-54B models by a substantial margin. We release LMT in four sizes (0.6B/1.7B/4B/8B) to catalyze future research and to provide strong baselines for inclusive, scalable, and high-quality MMT: https://github.com/NiuTrans/LMT
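The two techniques named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the 0.2 retention ratio, the pair/field names, and the prompt template are all illustrative assumptions; only the ideas (keeping fewer reverse-direction X → En/Zh pairs, and prefixing a translation request with parallel sentences in typologically related auxiliary languages) come from the abstract.

```python
import random

def strategic_downsample(pairs, reverse_ratio=0.2, seed=0):
    """Sketch of Strategic Downsampling: keep every forward-direction
    (En/Zh -> X) pair, but only a fraction of reverse-direction
    (X -> En/Zh) pairs, reducing the many-to-one mappings behind
    directional degeneration. `reverse_ratio` is an assumed value,
    not the paper's setting."""
    rng = random.Random(seed)
    kept = []
    for pair in pairs:
        if pair["tgt_lang"] in ("en", "zh"):   # reverse direction
            if rng.random() < reverse_ratio:
                kept.append(pair)
        else:                                   # forward direction
            kept.append(pair)
    return kept

def pmp_prompt(src_lang, tgt_lang, src_text, aux_examples):
    """Sketch of Parallel Multilingual Prompting: prepend parallel
    sentences in related auxiliary languages before the actual
    translation request. The template text is hypothetical."""
    lines = [f"[{lang}] {text}" for lang, text in aux_examples]
    lines.append(f"Translate from {src_lang} to {tgt_lang}: {src_text}")
    return "\n".join(lines)
```

For example, `pmp_prompt("en", "de", "Hello.", [("nl", "Hallo.")])` yields a prompt whose auxiliary Dutch line precedes the English-to-German request, letting the model exploit the typological similarity between Dutch and German.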