🤖 AI Summary
This study addresses the significant structural and usage differences between Chinese and English passive constructions, a phenomenon for which machine translation evaluation currently lacks dedicated resources. The authors present the first large-scale, multi-domain, bidirectional parallel corpus of Chinese–English passive sentences, combining automatic syntactic annotation with manual validation to yield a high-quality test set. Evaluation on this benchmark reveals that mainstream models tend to preserve the source-language passive voice rather than adapt to target-language conventions; that commercial neural machine translation systems outperform large language models on automatic metrics, while the latter produce more diverse translations; and that voice consistency with human translators is markedly higher in English-to-Chinese than in Chinese-to-English translation. The result is the first structured evaluation benchmark for passive-voice translation.
📝 Abstract
Machine Translation (MT) evaluation has moved beyond general-purpose metrics towards more specific linguistic phenomena. In the English–Chinese language pair, passive sentences are constructed and distributed differently due to cross-linguistic variation, and thus need special attention in MT. This paper proposes a bidirectional multi-domain dataset of passive sentences, extracted from five Chinese–English parallel corpora and annotated automatically with structure labels according to the human translation, together with a test set whose annotation is manually verified. The dataset consists of 73,965 parallel sentence pairs (2,358,731 English words, 3,498,229 Chinese characters). We evaluate two state-of-the-art open-source MT systems on our dataset, and four commercial models on the test set. The results show that, unlike humans, models are influenced more by the voice of the source text than by the general voice usage of the source language, and therefore tend to maintain the passive voice when translating a passive in either direction. However, models demonstrate some knowledge of the low frequency and predominantly negative context of Chinese passives, leading to higher voice consistency with human translators in English-to-Chinese translation than in Chinese-to-English translation. Commercial NMT models scored higher in metric evaluations, but LLMs showed a better ability to produce diverse alternative translations. The datasets and annotation script will be shared upon request.
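The abstract does not spell out how the automatic structure labeling works. Purely as an illustration (not the paper's actual annotation script), a minimal surface-level heuristic for flagging passive sentences in each language might look like the sketch below; the marker list and the regex are assumptions of this sketch, and a real pipeline would rely on syntactic parsing rather than string matching.

```python
import re

# Hypothetical marker list -- illustrative only, not the paper's annotation scheme.
# 被 is the canonical Chinese passive marker; 叫/让/给 can also mark passives.
CHINESE_PASSIVE_MARKERS = ("被", "叫", "让", "给")

# Crude English heuristic: a form of "be" followed by a likely past participle.
EN_PASSIVE_RE = re.compile(
    r"\b(am|is|are|was|were|be|been|being)\s+\w+(ed|en)\b",
    re.IGNORECASE,
)

def is_chinese_passive(sentence: str) -> bool:
    """Flag a Chinese sentence containing an overt passive marker."""
    return any(marker in sentence for marker in CHINESE_PASSIVE_MARKERS)

def is_english_passive(sentence: str) -> bool:
    """Flag an English sentence matching a 'be + participle' pattern."""
    return EN_PASSIVE_RE.search(sentence) is not None
```

Heuristics like these over-generate (e.g. 让 also marks causatives, and "is hardened" vs. adjectival uses are indistinguishable on the surface), which is presumably why the paper pairs automatic labeling with manual verification for the test set.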