🤖 AI Summary
Professional-domain machine translation often neglects communicative goals and client requirements (i.e., translation norms), leading to outputs misaligned with real-world practice. Method: This study pioneers the systematic integration of translation norm theory into machine translation, proposing a norm-explicit LLM-based translation framework. It leverages prompt engineering to elicit multi-style translations from large language models and employs a multidimensional evaluation combining expert error analysis, user preference ranking, and automated metrics. Results: Evaluated on investor relations texts from 33 listed companies, the method consistently outperforms official human translations in human evaluation. Its core contributions are: (1) establishing a norm-driven translation paradigm; (2) empirically validating that norm-guided MT can surpass professional human translation; and (3) providing an interpretable, controllable pathway for business-oriented machine translation.
📝 Abstract
In professional settings, translation is guided by communicative goals and client needs, often formalized as specifications. While existing evaluation frameworks acknowledge the importance of such specifications, they are often treated only implicitly in machine translation (MT) research. Drawing on translation studies, we provide a theoretical rationale for why specifications matter in professional translation, as well as a practical guide to implementing specification-aware MT and evaluation. Building on this foundation, we apply our framework to the translation of investor relations texts from 33 publicly listed companies. In our experiment, we compare five translation types, including official human translations and prompt-based outputs from large language models (LLMs), using expert error analysis, user preference rankings, and an automatic metric. The results show that LLM translations guided by specifications consistently outperformed official human translations in human evaluations, highlighting a gap between perceived and expected quality. These findings demonstrate that integrating specifications into MT workflows, with human oversight, can improve translation quality in ways aligned with professional practice.
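The specification-guided prompting the abstract describes could be sketched as below. This is a minimal illustration, not the study's actual prompts: the function name `build_spec_prompt`, the specification fields (audience, register, terminology), and the example values are all hypothetical assumptions about how client specifications might be made explicit to an LLM.

```python
def build_spec_prompt(source_text: str, specs: dict) -> str:
    """Compose a specification-aware translation prompt.

    `specs` maps specification fields (e.g., audience, register,
    terminology) to client requirements. All field names and wording
    here are illustrative, not the prompts used in the study.
    """
    spec_lines = "\n".join(f"- {key}: {value}" for key, value in specs.items())
    return (
        "Translate the following investor relations text.\n"
        "Follow these client specifications:\n"
        f"{spec_lines}\n\n"
        f"Source text:\n{source_text}\n"
    )


# Hypothetical usage with made-up specification values.
prompt = build_spec_prompt(
    "<source paragraph from an earnings release>",
    {
        "audience": "institutional investors",
        "register": "formal disclosure style, concise",
        "terminology": "follow the client's approved glossary",
    },
)
print(prompt)
```

The resulting string would be sent to an LLM as the translation instruction; varying the `specs` dictionary is one way to elicit the multiple specification-guided translation styles compared in the experiment.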