Span-Level Machine Translation Meta-Evaluation

πŸ“… 2026-03-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the inconsistency and unreliability of existing methods for evaluating automatic metrics designed for machine translation (MT) error detection. The authors propose a meta-evaluation framework tailored to span-level MT error detection, whose key innovation is the "match with partial overlap and partial credit" (MPP) strategy, combined with micro-averaged statistics to resolve matching ambiguities in error localization. Through systematic comparisons of different implementations of precision, recall, and F-score, and by incorporating error-type and severity annotations, the study demonstrates the robustness and soundness of MPP. Experiments reveal critical flaws in widely used evaluation approaches, and the proposed framework enables a comprehensive, reliable assessment of state-of-the-art MT error detection systems.

πŸ“ Abstract
Machine Translation (MT) and automatic MT evaluation have improved dramatically in recent years, enabling numerous novel applications. Automatic evaluation techniques have evolved from producing scalar quality scores to precisely locating translation errors and assigning them error categories and severity levels. However, it remains unclear how to reliably measure the evaluation capabilities of auto-evaluators that do error detection, as no established technique exists in the literature. This work investigates different implementations of span-level precision, recall, and F-score, showing that seemingly similar approaches can yield substantially different rankings, and that certain widely-used techniques are unsuitable for evaluating MT error detection. We propose "match with partial overlap and partial credit" (MPP) with micro-averaging as a robust meta-evaluation strategy and release code for its use publicly. Finally, we use MPP to assess the state of the art in MT error detection.
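The abstract does not spell out how "match with partial overlap and partial credit" with micro-averaging is computed, so the following is only an illustrative sketch of the general idea: each predicted error span earns fractional credit proportional to its character overlap with gold spans, and precision/recall are micro-averaged by pooling numerators and denominators across all sentences. The function names, the (start, end) span encoding, and the best-overlap credit rule are all assumptions for illustration, not the paper's exact MPP definition.

```python
# Hedged sketch of span-level micro-averaged precision/recall/F1 with
# partial-overlap, partial-credit matching. The exact MPP matching and
# aggregation rules in the paper may differ from this illustration.

def overlap(a, b):
    # Length of the intersection of two half-open (start, end) spans.
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def micro_prf(examples):
    """examples: list of (pred_spans, gold_spans) pairs, one per sentence.
    Each span is a half-open character interval (start, end).
    Returns micro-averaged (precision, recall, f1)."""
    credit = pred_len = gold_len = 0
    for preds, golds in examples:
        for p in preds:
            # Partial credit: each predicted span counts its best overlap
            # with any gold span (an assumed credit rule, not the paper's).
            credit += max((overlap(p, g) for g in golds), default=0)
        pred_len += sum(end - start for start, end in preds)
        gold_len += sum(end - start for start, end in golds)
    # Micro-averaging: pool credit and span lengths over the whole corpus
    # before dividing, so long sentences weigh proportionally more.
    precision = credit / pred_len if pred_len else 0.0
    recall = credit / gold_len if gold_len else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

For example, a predicted span (0, 4) against a gold span (2, 6) overlaps in 2 of 4 characters on each side, giving precision = recall = F1 = 0.5, whereas an exact-match criterion would score it 0 — which is precisely the kind of ranking divergence between seemingly similar implementations that the paper investigates.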
Problem

Research questions and friction points this paper is trying to address.

machine translation
error detection
meta-evaluation
span-level evaluation
automatic evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

span-level evaluation
machine translation error detection
meta-evaluation
partial overlap matching
micro-averaged F-score