DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient consistency and accuracy in document-level machine translation, this paper proposes DelTA, an LLM-based agent for online document translation. Its core is a multi-level memory architecture that integrates Proper Noun Records, a Bilingual Summary, and long- and short-term memory modules, continuously retrieved and updated by auxiliary LLM-based components, enabling consistent, accurate, and complete sentence-by-sentence translation. Evaluated across four open- and closed-source LLMs on two representative document-translation datasets, DelTA improves consistency scores by up to 4.58 percentage points and COMET scores by up to 3.16 points on average, and also improves pronoun resolution and context-dependent translation. The agent's summary component additionally shows promise for query-based summarization. The code and datasets are publicly available.

📝 Abstract
Large language models (LLMs) have achieved reasonable quality improvements in machine translation (MT). However, most current research on MT-LLMs still faces significant challenges in maintaining translation consistency and accuracy when processing entire documents. In this paper, we introduce DelTA, a Document-levEL Translation Agent designed to overcome these limitations. DelTA features a multi-level memory structure that stores information across various granularities and spans, including Proper Noun Records, Bilingual Summary, Long-Term Memory, and Short-Term Memory, which are continuously retrieved and updated by auxiliary LLM-based components. Experimental results indicate that DelTA significantly outperforms strong baselines in terms of translation consistency and quality across four open/closed-source LLMs and two representative document translation datasets, achieving an increase in consistency scores by up to 4.58 percentage points and in COMET scores by up to 3.16 points on average. DelTA employs a sentence-by-sentence translation strategy, ensuring no sentence omissions and offering a memory-efficient solution compared to the mainstream method. Furthermore, DelTA improves pronoun and context-dependent translation accuracy, and the summary component of the agent also shows promise as a tool for query-based summarization tasks. The code and data of our approach are released at https://github.com/YutongWang1216/DocMTAgent.
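The memory design described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see their repository for that); the `llm` callable, prompt format, and class names here are assumptions used only to show how Proper Noun Records, a Bilingual Summary, and short-/long-term stores might condition sentence-by-sentence translation.

```python
from collections import deque

class MultiLevelMemory:
    """Illustrative multi-level memory: proper-noun records, a running
    bilingual summary, and short-/long-term stores of sentence pairs."""
    def __init__(self, short_term_size=5):
        self.proper_nouns = {}            # source term -> chosen translation
        self.bilingual_summary = ""       # running summary of the document so far
        self.short_term = deque(maxlen=short_term_size)  # recent (src, tgt) pairs
        self.long_term = []               # all (src, tgt) pairs seen so far

    def update(self, src, tgt, nouns=None, summary=None):
        """Record a new sentence pair and optionally refresh nouns/summary."""
        self.short_term.append((src, tgt))
        self.long_term.append((src, tgt))
        if nouns:
            self.proper_nouns.update(nouns)
        if summary is not None:
            self.bilingual_summary = summary

def translate_document(sentences, llm):
    """Translate sentence by sentence, conditioning each call on the
    current memory state. `llm` is a stand-in callable (prompt -> text)."""
    memory = MultiLevelMemory()
    translations = []
    for src in sentences:
        recent = "\n".join(f"{s} => {t}" for s, t in memory.short_term)
        glossary = "; ".join(f"{k}={v}" for k, v in memory.proper_nouns.items())
        prompt = (f"Summary: {memory.bilingual_summary}\n"
                  f"Glossary: {glossary}\n"
                  f"Recent pairs:\n{recent}\n"
                  f"Translate: {src}")
        tgt = llm(prompt)
        translations.append(tgt)
        memory.update(src, tgt)
    return translations, memory
```

Because each LLM call sees only the compact memory state rather than the whole document, this style of loop avoids sentence omissions by construction and keeps the context window small, which matches the memory-efficiency claim in the abstract.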
Problem

Research questions and friction points this paper is trying to address.

Improves document-level translation consistency and accuracy
Addresses challenges in maintaining translation quality across entire documents
Enhances pronoun and context-dependent translation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-level memory structure for translation consistency
Sentence-by-sentence strategy prevents omissions
Auxiliary LLM components enhance context accuracy
Yutong Wang
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
Jiali Zeng
Tencent
Natural Language Processing, Deep Learning, Neural Machine Translation
Xuebo Liu
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
Derek F. Wong
Professor, Department of Computer and Information Science, University of Macau
Machine Translation, Neural Machine Translation, Natural Language Processing, Machine Learning
Fandong Meng
WeChat AI, Tencent
Machine Translation, Natural Language Processing
Jie Zhou
Pattern Recognition Center, WeChat AI, Tencent Inc, China
Min Zhang
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China