Same evaluation, more tokens: On the effect of input length for machine translation evaluation using Large Language Models

📅 2025-05-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a long-overlooked text-length bias in machine translation evaluation: as input length increases, large language models (LLMs) label fewer errors, and system-level ranking accuracy deteriorates significantly (Kendall's τ drops markedly), violating the expectation that evaluation be consistent across input granularities. It is the first systematic study to empirically establish the length dependence of LLM judgments of long-document translation quality. To mitigate this bias, the authors propose Focus Sentence Prompting (FSP) and MQM-aligned supervised fine-tuning, combining fine-grained sentence-level prompting with task-specific adaptation grounded in the Multidimensional Quality Metrics (MQM) framework. Experiments show that these methods improve error recall for long-document evaluation by 32%, restore system-level ranking correlation to short-sentence levels (τ > 0.85), and substantially enhance the robustness and fairness of LLM-based MT evaluation.
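The system-level ranking accuracy mentioned above is measured with Kendall's τ, the rank correlation between human and LLM scores across MT systems. A minimal sketch with invented scores (the numbers below are illustrative assumptions, not the paper's data) shows how compressed, tie-heavy scores on long inputs can drag τ down even when short-input scores rank systems perfectly:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a rank correlation between two equally long score lists."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical per-system error scores for five MT systems (lower = better).
human_scores = [2.1, 3.4, 1.8, 5.0, 2.9]
llm_short = [2.0, 3.6, 1.7, 4.8, 3.0]  # sentence-level inputs: ranking preserved
llm_long = [2.5, 2.4, 2.6, 3.0, 2.5]   # document-level inputs: scores compressed

print(f"tau (short inputs): {kendall_tau(human_scores, llm_short):.2f}")
print(f"tau (long inputs):  {kendall_tau(human_scores, llm_long):.2f}")
```

With the short-input scores every pair of systems is ordered as the humans order them (τ = 1.0), while the flattened long-input scores shuffle and tie pairs, pushing τ toward zero.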

📝 Abstract
Accurately evaluating machine-translated text remains a long-standing challenge, particularly for long documents. Recent work has shown that large language models (LLMs) can serve as reliable and interpretable sentence-level translation evaluators via MQM error span annotations. With modern LLMs supporting larger context windows, a natural question arises: can we feed entire document translations into an LLM for quality assessment? Ideally, evaluation should be invariant to text length, producing consistent error spans regardless of input granularity. However, our analysis shows that text length significantly impacts evaluation: longer texts lead to fewer error spans and reduced system ranking accuracy. To address this limitation, we evaluate several strategies, including granularity-aligned prompting, Focus Sentence Prompting (FSP), and a fine-tuning approach to better align LLMs with the evaluation task. The latter two methods largely mitigate this length bias, making LLMs more reliable for long-form translation evaluation.
Problem

Research questions and friction points this paper is trying to address.

Impact of text length on machine translation evaluation accuracy
LLM evaluation inconsistency with varying input granularity
Strategies to mitigate length bias in translation assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses granularity-aligned prompting for evaluation
Implements Focus Sentence Prompting (FSP)
Applies fine-tuning to align LLMs with task
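The FSP idea listed above can be sketched as a prompt builder. This is a hypothetical reconstruction, not the paper's actual template: the function name, focus markers, and prompt wording are all assumptions. The core mechanism is that the model sees the full document for context but is asked to annotate MQM error spans only in one marked sentence, restoring sentence-level granularity:

```python
def build_focus_sentence_prompt(src_sents, tgt_sents, focus_idx):
    """Hypothetical FSP-style prompt: whole document as context,
    MQM error annotation requested for a single focus sentence."""
    # Mark the focus sentence so the model can distinguish it from context.
    marked_tgt = [
        f"<focus>{s}</focus>" if i == focus_idx else s
        for i, s in enumerate(tgt_sents)
    ]
    return (
        "You are an MQM translation-quality annotator.\n"
        "Source document:\n" + "\n".join(src_sents) + "\n\n"
        "Translated document:\n" + "\n".join(marked_tgt) + "\n\n"
        "List MQM error spans (category, severity) ONLY for the sentence "
        "wrapped in <focus>...</focus>."
    )

# One prompt per target sentence: document-level context, sentence-level focus.
prompts = [
    build_focus_sentence_prompt(
        ["Guten Morgen.", "Wie geht es dir?"],
        ["Good morning.", "How are you?"],
        i,
    )
    for i in range(2)
]
```

Iterating the focus index over the document yields one annotation request per sentence, which is what lets the evaluation stay consistent with short-input behavior while still exposing document context.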