Retrieval-Augmented Machine Translation with Unstructured Knowledge

πŸ“… 2024-12-05
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 5
✨ Influential: 0
πŸ€– AI Summary
Effectively leveraging unstructured, unpaired multilingual knowledge remains challenging in machine translation (MT). Method: This paper proposes a retrieval-augmented MT paradigm, introducing RAGtrans, the first dedicated benchmark for this task (79K samples paired with multilingual unstructured documents), together with a human-annotation-free multi-task training framework that teaches large language models to dynamically retrieve and integrate cross-lingual unstructured textual knowledge. The approach combines retrieval-augmented generation (RAG), multi-task training, and GPT-4o-assisted synthetic data construction. Contribution/Results: Evaluated with human assessment and automatic metrics (BLEU/COMET), the method consistently improves LLM-based MT across multiple benchmarks, yielding gains of +1.58–3.09 BLEU and +1.00–2.03 COMET, and notably enhances terminology accuracy, cultural-phrase handling, and robustness for low-resource languages.

πŸ“ Abstract
Retrieval-augmented generation (RAG) introduces additional information to enhance large language models (LLMs). In machine translation (MT), previous work typically retrieves in-context examples from paired MT corpora, or domain-specific knowledge from knowledge graphs, to enhance models' MT ability. However, a large amount of world knowledge is organized in unstructured documents and may not be fully paired across languages. In this paper, we study retrieval-augmented MT using unstructured documents. Specifically, we build RAGtrans, the first benchmark to train and evaluate LLMs' retrieval-augmented MT ability. RAGtrans contains 79K MT samples collected via GPT-4o and human translators. In addition, documents in different languages are provided to supply the knowledge needed for these samples. Based on RAGtrans, we further propose a multi-task training method that teaches LLMs how to use information from multilingual documents during translation. The method uses existing multilingual corpora to create auxiliary training objectives without additional labeling requirements. Extensive experiments show that the method improves LLMs by 1.58–3.09 BLEU and 1.00–2.03 COMET scores.
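The retrieval-augmented MT setup the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bag-of-words cosine retriever, the `build_prompt` function, and the prompt wording are all assumptions chosen for demonstration.

```python
from collections import Counter
import math

def score(query_tokens, doc_tokens):
    # Cosine similarity over bag-of-words counts (toy retriever;
    # the paper's actual retrieval model is not specified here).
    q, d = Counter(query_tokens), Counter(doc_tokens)
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def build_prompt(source_sentence, documents, top_k=2):
    """Retrieve the top-k documents most similar to the source sentence
    and pack them into a translation prompt for an LLM."""
    q = source_sentence.lower().split()
    ranked = sorted(documents, key=lambda d: score(q, d.lower().split()),
                    reverse=True)
    context = "\n".join(f"[doc {i + 1}] {d}" for i, d in enumerate(ranked[:top_k]))
    return ("Use the following background documents (any language) to translate.\n"
            f"{context}\n"
            f"Translate into German: {source_sentence}")

docs = [
    "RAGtrans is a benchmark with 79K machine translation samples.",
    "The Eiffel Tower is located in Paris.",
    "Retrieval-augmented generation supplies external knowledge to LLMs.",
]
prompt = build_prompt("Retrieval-augmented generation improves machine translation.", docs)
print(prompt)
```

In this toy run, the knowledge-bearing document about retrieval-augmented generation ranks first, so it leads the prompt context that the translating LLM would see.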
Problem

Research questions and friction points this paper is trying to address.

Enhancing machine translation with unstructured knowledge retrieval
Addressing cross-language knowledge gaps in translation models
Improving LLM translation accuracy using multilingual documents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses unstructured documents for retrieval augmentation
Multi-task training with auxiliary objectives
Creates benchmark RAGtrans for evaluation
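The "auxiliary objectives without additional labeling" idea above can be illustrated with a toy self-supervised example: derive reconstruction pairs directly from unlabeled documents, so no human annotation is needed. The `make_auxiliary_examples` helper and its single-word masking scheme are hypothetical stand-ins; the paper defines its own objectives over multilingual documents.

```python
import random

def make_auxiliary_examples(monolingual_docs, mask_token="<mask>", seed=0):
    """Build label-free auxiliary training pairs from raw documents:
    mask one word per document and ask the model to reconstruct it.
    (Illustrative stand-in for the paper's auxiliary objectives.)"""
    rng = random.Random(seed)
    examples = []
    for doc in monolingual_docs:
        words = doc.split()
        if len(words) < 2:
            continue
        i = rng.randrange(len(words))  # position of the word to mask
        masked = words[:i] + [mask_token] + words[i + 1:]
        examples.append({
            "input": "Fill in the masked word: " + " ".join(masked),
            "target": words[i],
        })
    return examples

corpus = [
    "RAGtrans contains 79K MT samples.",
    "Knowledge may be unpaired across languages.",
]
for ex in make_auxiliary_examples(corpus):
    print(ex["input"], "->", ex["target"])
```

Because the input/target pairs are generated mechanically from existing corpora, such objectives scale without any additional human labeling, which is the property the Innovation list highlights.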
Jiaan Wang
WeChat AI, Tencent
Natural Language Processing, Machine Translation, Information Systems
Fandong Meng
WeChat AI, Tencent
Machine Translation, Natural Language Processing
Yingxue Zhang
Pattern Recognition Center, WeChat AI, Tencent Inc
Jie Zhou
Pattern Recognition Center, WeChat AI, Tencent Inc