AI Summary
Effectively leveraging unstructured, unpaired multilingual knowledge remains challenging in machine translation (MT). Method: This paper proposes a retrieval-augmented MT paradigm, introducing RAGtrans, the first dedicated benchmark for this task (79K samples covering multilingual unstructured documents), and a human-annotation-free multi-task training framework enabling large language models to dynamically retrieve and integrate cross-lingual unstructured textual knowledge. The approach combines retrieval-augmented generation (RAG), multi-task pretraining, and GPT-4o-assisted synthetic data construction. Contribution/Results: Evaluated via human assessment and automatic metrics (BLEU/COMET), the method consistently improves large-model MT performance across multiple benchmarks, yielding gains of 1.58-3.09 BLEU and 1.00-2.03 COMET. It significantly enhances terminology accuracy, cultural-phrase handling, and robustness for low-resource languages.
Abstract
Retrieval-augmented generation (RAG) supplies additional information to enhance large language models (LLMs). In machine translation (MT), previous work typically retrieves in-context examples from paired MT corpora, or domain-specific knowledge from knowledge graphs, to enhance models' translation ability. However, a large amount of world knowledge is organized in unstructured documents and might not be fully paired across languages. In this paper, we study retrieval-augmented MT using unstructured documents. Specifically, we build RAGtrans, the first benchmark for training and evaluating LLMs' retrieval-augmented MT ability. RAGtrans contains 79K MT samples collected via GPT-4o and human translators. In addition, documents in different languages are provided to supply the knowledge needed by these samples. Based on RAGtrans, we further propose a multi-task training method that teaches LLMs how to use information from multilingual documents during translation. The method uses existing multilingual corpora to create auxiliary training objectives, requiring no additional human annotation. Extensive experiments show that the method improves LLMs by 1.58-3.09 BLEU and 1.00-2.03 COMET.
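The core pipeline described above, retrieving knowledge from unstructured multilingual documents and conditioning the translation on it, can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation: the toy bag-of-words retriever, the function names, and the prompt template are all assumptions; a real system would use dense embeddings and pass the prompt to an LLM.

```python
# Sketch of retrieval-augmented MT: retrieve relevant documents (in any
# language), then build a translation prompt that conditions on them.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (toy retriever)."""
    q = Counter(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(source: str, docs: list[str]) -> str:
    """Assemble a prompt asking an LLM to translate using retrieved knowledge."""
    context = "\n".join(f"- {d}" for d in docs)
    return ("Use the following background documents (any language) "
            "when translating.\n"
            f"{context}\n"
            f"Translate into English: {source}")

# Hypothetical multilingual, unpaired knowledge store.
knowledge = [
    "Der Begriff Schadenfreude bezeichnet Freude am Missgeschick anderer.",
    "The Eiffel Tower is located in Paris.",
]
source = "Er empfand Schadenfreude."
retrieved = retrieve("Schadenfreude Freude", knowledge, k=1)
prompt = build_prompt(source, retrieved)
```

The sketch retrieves the German definition (the only document sharing terms with the query) and injects it into the prompt, mirroring how cross-lingual unstructured knowledge can disambiguate terminology during translation.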