🤖 AI Summary
This work proposes UOT-ERRANT, a novel evaluation metric for grammatical error correction (GEC) that addresses a limitation of embedding-based metrics such as BERTScore: because most source tokens remain unchanged in both the hypothesis and the reference, sentence-level similarity is dominated by unmodified tokens and becomes insensitive to the edits that actually matter, especially in high-edit-density fluency scenarios. The method first leverages ERRANT to extract edit sequences and construct edit-level vector representations, then employs unbalanced optimal transport (UOT) to compute an interpretable soft alignment between hypothesis and reference edits. By integrating edit-level representations with UOT, UOT-ERRANT outperforms existing metrics under the SEEDA meta-evaluation framework, particularly in +Fluency settings, and its transport plan additionally enables fine-grained system analysis and ranking.
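The first step the summary describes, extracting edits between a source sentence and a corrected sentence, can be illustrated with a rough stand-in. Note this is only a sketch: real ERRANT uses a linguistically informed alignment over spaCy parses and assigns error types, whereas `difflib` below merely finds differing token spans.

```python
from difflib import SequenceMatcher

def extract_edits(src_tokens, tgt_tokens):
    """Return (src_span, tgt_span) pairs where tgt differs from src.

    A toy approximation of edit extraction: ERRANT itself performs a
    linguistically informed alignment and labels each edit with an
    error type; difflib only locates the differing spans.
    """
    sm = SequenceMatcher(a=src_tokens, b=tgt_tokens)
    edits = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":  # keep insertions, deletions, replacements
            edits.append((src_tokens[i1:i2], tgt_tokens[j1:j2]))
    return edits

src = "He go to school yesterday".split()
hyp = "He went to school yesterday".split()
print(extract_edits(src, hyp))  # [(['go'], ['went'])]
```

Each such edit would then be embedded as an "edit vector" before the transport step; for production use, the `errant` Python package provides the actual typed edits.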
📝 Abstract
Automatic evaluation in grammatical error correction (GEC) is crucial for selecting the best-performing systems. Currently, reference-based metrics are a popular choice; these essentially measure the similarity between hypothesis and reference sentences. However, similarity measures based on embeddings, such as BERTScore, are often ineffective, since many words in the source sentence remain unchanged in both the hypothesis and the reference. This study focuses on edits specifically designed for GEC, i.e., ERRANT edits, and computes similarity over the edits made to the source sentence. To this end, we propose the edit vector, a representation of an edit, and introduce a new metric, UOT-ERRANT, which transports these edit vectors from hypothesis to reference using unbalanced optimal transport. Experiments with SEEDA meta-evaluation show that UOT-ERRANT improves evaluation performance, particularly in the +Fluency domain where many edits occur. Moreover, our method is highly interpretable because the transport plan can be read as a soft edit alignment, making UOT-ERRANT a useful metric for both ranking and analyzing GEC systems. Our code is available at https://github.com/gotutiyan/uot-errant.
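The abstract's core step, transporting hypothesis edit vectors onto reference edit vectors with unbalanced optimal transport, can be sketched with a small entropic Sinkhorn-style solver. This is an illustrative toy, not the paper's implementation: the hyperparameters `eps` and `tau`, the cosine cost, and the 2-D "edit vectors" below are all assumptions made for the example.

```python
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.1, tau=1.0, n_iter=500):
    """Entropic unbalanced OT via Sinkhorn-like scaling iterations.

    tau controls how strictly the plan's marginals must match a and b;
    a finite tau lets mass be created or destroyed, so a hypothesis
    edit with no good reference counterpart is not force-aligned.
    """
    K = np.exp(-C / eps)                  # Gibbs kernel of the cost
    u, v = np.ones_like(a), np.ones_like(b)
    fi = tau / (tau + eps)                # marginal-relaxation exponent
    for _ in range(n_iter):
        u = (a / (K @ v)) ** fi
        v = (b / (K.T @ u)) ** fi
    return u[:, None] * K * v[None, :]    # transport plan = soft alignment

# Toy edit vectors: 2 hypothesis edits vs. 3 reference edits.
H = np.array([[1.0, 0.0], [0.0, 1.0]])
R = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
Rn = R / np.linalg.norm(R, axis=1, keepdims=True)
C = 1.0 - Hn @ Rn.T                       # cosine distance as cost
plan = unbalanced_sinkhorn(np.ones(2), np.ones(3), C)
print(plan.round(3))  # row i: soft alignment of hypothesis edit i
```

For real use, the POT library offers a tested solver (`ot.sinkhorn_unbalanced`); the point here is only that the resulting plan is directly readable, as each row shows how a hypothesis edit's mass spreads over the reference edits.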