SimulPL: Aligning Human Preferences in Simultaneous Machine Translation

📅 2025-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing simultaneous machine translation (SiMT) methods struggle to jointly satisfy users’ diverse preferences regarding translation quality, coherence, key-information coverage, conciseness, and latency. This paper proposes the first end-to-end preference-aligned SiMT framework, which systematically defines five human preference dimensions for SiMT—quality, coherence, informativeness, conciseness, and latency—and innovatively formalizes latency preference as an explicit optimization objective, enabling joint learning of read/write policies and translation generation. Leveraging GPT-4/4o, we construct a multidimensional preference-annotated dataset and integrate preference learning with policy optimization. Experiments on Zh→En, De→En, and En→Zh tasks demonstrate that our method significantly improves human preference alignment across all latency levels, consistently outperforming state-of-the-art baselines.

📝 Abstract
Simultaneous Machine Translation (SiMT) generates translations while receiving streaming source inputs. This requires the SiMT model to learn a read/write policy, deciding when to translate and when to wait for more source input. Numerous linguistic studies indicate that audiences in SiMT scenarios have distinct preferences, such as accurate translations, simpler syntax, and no unnecessary latency. Aligning SiMT models with these human preferences is crucial to improving their performance. However, this issue remains unexplored. Additionally, preference optimization for the SiMT task is challenging: existing methods focus solely on optimizing the generated responses, ignoring human preferences related to latency and the optimization of the read/write policy during the preference optimization phase. To address these challenges, we propose Simultaneous Preference Learning (SimulPL), a preference learning framework tailored for the SiMT task. In the SimulPL framework, we categorize SiMT human preferences into five aspects: translation quality preference, monotonicity preference, key point preference, simplicity preference, and latency preference. By leveraging the first four preferences, we construct human preference prompts to efficiently guide GPT-4/4o in generating preference data for the SiMT task. In the preference optimization phase, SimulPL integrates latency preference into the optimization objective and enables SiMT models to improve the read/write policy, thereby aligning with human preferences more effectively. Experimental results indicate that SimulPL achieves better alignment with human preferences across all latency levels in Zh→En, De→En, and En→Zh SiMT tasks. Our data and code will be available at https://github.com/EurekaForNLP/SimulPL.
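The abstract describes folding latency preference directly into the preference-optimization objective alongside the response-level preferences. The paper's exact loss is not given on this page, so the following is only a minimal illustrative sketch of one plausible formulation: a DPO-style pairwise loss whose preference margin is shifted by a latency difference, so that between two hypotheses of similar quality the lower-latency one is favored. All names (`dpo_latency_loss`, `alpha`) and the specific latency term are assumptions, not the authors' method.

```python
import math

def dpo_latency_loss(logp_chosen, logp_rejected,
                     ref_logp_chosen, ref_logp_rejected,
                     latency_chosen, latency_rejected,
                     beta=0.1, alpha=1.0):
    """Hypothetical DPO-style preference loss with a latency margin.

    logp_* are sequence log-probabilities under the policy model,
    ref_logp_* under a frozen reference model, and latency_* are
    latency scores (e.g. an Average-Lagging-style metric) for each
    hypothesis. The latency term below is an illustrative stand-in
    for SimulPL's latency preference; the paper's actual objective
    may differ.
    """
    # Standard DPO margin: implicit reward difference between the pair.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Shift the margin so a slower "chosen" hypothesis is penalized
    # relative to a faster "rejected" one (hypothetical latency term).
    margin -= alpha * (latency_chosen - latency_rejected)
    # -log(sigmoid(margin)), computed with plain math for clarity.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With identical quality margins, lowering the chosen hypothesis's latency reduces the loss, which is the qualitative behavior a latency-aware preference objective should exhibit.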
Problem

Research questions and friction points this paper is trying to address.

Simultaneous Machine Translation
User Preference
Translation Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

SimulPL
Simultaneous Machine Translation
Preference Learning
Donglei Yu
Institute of Automation, Chinese Academy of Sciences
simultaneous machine translation, large language model
Yang Zhao
School of Artificial Intelligence, University of Chinese Academy of Sciences; State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Jie Zhu
Graduate School of Translation and Interpretation, Beijing Foreign Studies University
Yangyifan Xu
Institute of Automation, Chinese Academy of Sciences
Yu Zhou
School of Artificial Intelligence, University of Chinese Academy of Sciences; State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Chengqing Zong
School of Artificial Intelligence, University of Chinese Academy of Sciences; State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China