Align-then-Slide: A complete evaluation framework for Ultra-Long Document-Level Machine Translation

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methods rely on a sentence-level alignment assumption, making them inadequate for assessing the non-aligned, document-level machine translation (doc-MT) outputs generated by large language models (LLMs). To address this, we propose Align-then-Slide, the first automated, two-stage evaluation framework tailored for ultra-long-document MT. It first resolves structural irregularities via automatic source–target sentence alignment, then applies an *n*-chunk sliding window for multi-granularity quality scoring. On the WMT benchmark, the framework achieves a Pearson correlation of 0.929 with expert MQM rankings, the highest reported to date. Moreover, the high-quality preference data it generates substantially improves reinforcement learning (RL) training, outperforming supervised fine-tuning (SFT) baselines. The core contribution is breaking free from rigid sentence-alignment constraints, enabling precise, scalable, and learnable doc-MT evaluation.

📝 Abstract
Large language models (LLMs) have ushered in a new era for document-level machine translation (*doc*-MT), yet their whole-document outputs challenge existing evaluation methods that assume sentence-by-sentence alignment. We introduce **Align-then-Slide**, a complete evaluation framework for ultra-long doc-MT. In the Align stage, we automatically infer sentence-level source–target correspondences and rebuild the target to match the number of source sentences, resolving omissions and many-to-one/one-to-many mappings. In the n-Chunk Sliding Evaluate stage, we average metric scores under 1-, 2-, 3-, and 4-chunk granularities for multi-granularity assessment. Experiments on the WMT benchmark show a Pearson correlation of 0.929 between our method and expert MQM rankings. On a newly curated real-world test set, our method again aligns closely with human judgments. Furthermore, preference data produced by Align-then-Slide enables effective CPO training and direct use as a reward model for GRPO, both yielding translations preferred over a vanilla SFT baseline. These results validate our framework as an accurate, robust, and actionable evaluation tool for doc-MT systems.
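The n-Chunk Sliding Evaluate stage described above can be sketched as follows. This is a minimal illustration, not the authors' code: `score_chunk` is a stand-in for any segment-level MT metric (e.g. COMET), and the exact chunking and averaging details are assumptions based on the abstract.

```python
# Hypothetical sketch of the n-chunk sliding evaluation.
# Assumes the Align stage has already produced aligned (source, target)
# sentence pairs; the metric in score_chunk is a placeholder.

def chunks(pairs, n):
    """Split aligned sentence pairs into consecutive n-sentence chunks."""
    return [pairs[i:i + n] for i in range(0, len(pairs), n)]

def score_chunk(chunk):
    # Placeholder: replace with a real segment-level metric such as COMET.
    return 1.0

def slide_evaluate(pairs, max_n=4):
    """Average metric scores under 1-, 2-, ..., max_n-chunk granularities."""
    per_granularity = []
    for n in range(1, max_n + 1):
        scores = [score_chunk(c) for c in chunks(pairs, n)]
        per_granularity.append(sum(scores) / len(scores))
    return sum(per_granularity) / len(per_granularity)
```

Averaging across several chunk sizes lets the score reflect both local sentence-level quality and longer-range coherence within the document.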
Problem

Research questions and friction points this paper is trying to address.

Evaluating ultra-long document-level machine translation outputs
Resolving sentence alignment issues in translation evaluation
Providing multi-granularity assessment for document translation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns source-target sentences automatically
Uses multi-chunk sliding for granular assessment
Enables effective CPO training and GRPO rewards
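The last point, turning evaluation scores into training signal, might look like the following sketch. The function name and the `chosen`/`rejected` pair structure are assumptions modeled on common CPO-style preference data, not the paper's implementation.

```python
# Hypothetical sketch: building a preference pair for CPO-style training
# from Align-then-Slide scores. score_fn is any callable returning a
# quality score for a candidate translation (higher is better).

def build_preference_pair(source, candidates, score_fn):
    """Rank candidate translations and pair the best against the worst."""
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return {"prompt": source, "chosen": ranked[0], "rejected": ranked[-1]}
```

For GRPO, the same `score_fn` could be used directly as a reward model, scoring each sampled translation in a rollout group.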
Jiaxin Guo
Huawei Translation Services Center, Beijing, China
Daimeng Wei
Huawei Translation Services Center, Beijing, China
Yuanchang Luo
2012 Lab, Huawei
Xiaoyu Chen
Huawei Translation Services Center, Beijing, China
Zhanglin Wu
2012 Lab, Huawei Co. LTD
Huan Yang
Huawei Translation Services Center, Beijing, China
Hengchao Shang
Huawei Translation Services Center, Beijing, China
Zongyao Li
Huawei Translation Services Center, Beijing, China
Zhiqiang Rao
Huawei
Jinlong Yang
Huawei Translation Services Center, Beijing, China
Hao Yang
Huawei Translation Services Center, Beijing, China