🤖 AI Summary
This work addresses the inconsistency and unreliability of existing methods for evaluating automatic metrics designed for machine translation (MT) error detection. The authors propose a meta-evaluation framework tailored to span-level MT error detection, whose key innovation is the "Match with Partial Overlap and Partial Credit" (MPP) strategy, combined with micro-averaged statistics to resolve matching ambiguities in error localization. Through systematic comparisons of different implementations of precision, recall, and F-score, and by incorporating error-type and severity annotations, the study demonstrates the robustness and soundness of MPP. Experiments reveal critical flaws in prevailing evaluation approaches, and the proposed framework enables a comprehensive, reliable assessment of state-of-the-art MT error detection systems.
📄 Abstract
Machine Translation (MT) and automatic MT evaluation have improved dramatically in recent years, enabling numerous novel applications. Automatic evaluation techniques have evolved from producing scalar quality scores to precisely locating translation errors and assigning them error categories and severity levels. However, it remains unclear how to reliably measure the evaluation capabilities of auto-evaluators that perform error detection, as no established technique exists in the literature. This work investigates different implementations of span-level precision, recall, and F-score, showing that seemingly similar approaches can yield substantially different rankings, and that certain widely-used techniques are unsuitable for evaluating MT error detection. We propose "match with partial overlap and partial credit" (MPP) with micro-averaging as a robust meta-evaluation strategy and publicly release code for its use. Finally, we use MPP to assess the state of the art in MT error detection.
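To make the idea concrete, the sketch below shows one plausible way to compute micro-averaged span-level precision, recall, and F-score with partial overlap and partial credit: each predicted span earns credit proportional to how much it overlaps gold error spans, and the credits and span lengths are pooled across all sentences before dividing. This is an illustrative assumption about how such a metric can be formulated, not the paper's exact MPP definition; the function and variable names are hypothetical.

```python
def overlap(a, b):
    """Overlap length between two half-open character spans a=(start, end), b=(start, end)."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def mpp_scores(predicted, gold):
    """Micro-averaged P/R/F1 with partial overlap and partial credit (illustrative sketch).

    `predicted` and `gold` are lists of sentences; each sentence is a list of
    (start, end) error spans. Credit = total overlapping characters between
    predicted and gold spans, pooled over the whole corpus (micro-averaging),
    so longer sentences and longer spans weigh proportionally more.
    """
    credit = pred_len = gold_len = 0
    for p_spans, g_spans in zip(predicted, gold):
        for p in p_spans:
            # Partial credit: a predicted span that only partly covers a gold
            # span still earns the overlapping portion, not all-or-nothing.
            credit += sum(overlap(p, g) for g in g_spans)
        pred_len += sum(e - s for s, e in p_spans)
        gold_len += sum(e - s for s, e in g_spans)
    precision = credit / pred_len if pred_len else 0.0
    recall = credit / gold_len if gold_len else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, a predicted span (0, 4) against a gold span (2, 6) overlaps by 2 of 4 characters, so both precision and recall are 0.5 rather than 0, which an exact-match criterion would assign.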