🤖 AI Summary
This study investigates ChatGPT's ability to identify and classify machine translation (MT) errors in specialised (LSP) texts, evaluating outputs from both DeepL and ChatGPT itself. Method: In contrast to earlier LLM-evaluation work centred on general language, the authors apply a fine-grained, LSP-oriented error typology and test two zero-shot prompting strategies, one basic and one elaborated, against human expert annotations as the gold standard. Contribution/Results: ChatGPT reaches high recall and precision when annotating DeepL translations, and the elaborated prompt markedly improves error categorisation. Its assessment of its own translations, however, is significantly poorer, revealing clear limitations in self-evaluation. The findings delineate both the potential and the boundaries of LLM-based MT evaluation in specialised domains, pointing towards future work on open-source LLM-driven automated quality assessment and on applications in translation training, such as streamlining teacher evaluation and examining the effect of LLM annotations on students' post-editing.
📝 Abstract
This study investigates the capabilities of large language models (LLMs), specifically ChatGPT, in annotating machine translation (MT) outputs based on an error typology. In contrast to previous work focusing mainly on general language, we explore ChatGPT's ability to identify and categorise errors in specialised translations. Testing two different prompts and drawing on a customised error typology, we compare ChatGPT annotations with human expert evaluations of translations produced by DeepL and by ChatGPT itself. The results show that, for translations generated by DeepL, recall and precision are quite high. However, the accuracy of error categorisation depends on the prompt's specific features and level of detail, with ChatGPT performing very well when given a detailed prompt. When evaluating its own translations, ChatGPT achieves significantly poorer results, revealing limitations in self-assessment. These results highlight both the potential and the limitations of LLMs for translation evaluation, particularly in specialised domains. Our experiments pave the way for future research on open-source LLMs, which could produce annotations of comparable or even higher quality. In the future, we also aim to test the practical effectiveness of this automated evaluation in translation training, in particular by streamlining teachers' human evaluation process and by exploring the impact of LLM-generated annotations on students' post-editing and translation learning.
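For readers unfamiliar with how annotation quality is scored against a gold standard, the sketch below shows one plausible way the recall, precision, and categorisation figures mentioned above could be computed, assuming annotations are matched at the segment level and that error identification and error categorisation are scored separately. The matching criteria, segment ids, and category labels are hypothetical illustrations, not the study's actual typology, metrics, or data.

```python
from typing import Dict

# Hypothetical annotations: segment id -> error category assigned by each annotator.
# Labels and data are illustrative only, not the study's actual typology or results.
gold: Dict[int, str] = {1: "terminology", 2: "omission", 5: "mistranslation"}
llm: Dict[int, str] = {1: "terminology", 2: "addition", 7: "style"}

# Error identification: does the LLM flag the same segments as the human experts?
identified = set(llm) & set(gold)
precision = len(identified) / len(llm) if llm else 0.0
recall = len(identified) / len(gold) if gold else 0.0

# Error categorisation: among jointly identified errors, how often do the categories agree?
matches = sum(1 for seg in identified if llm[seg] == gold[seg])
category_accuracy = matches / len(identified) if identified else 0.0

print(f"precision={precision:.2f}, recall={recall:.2f}, "
      f"categorisation accuracy={category_accuracy:.2f}")
# precision=0.67, recall=0.67, categorisation accuracy=0.50
```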