🤖 AI Summary
Existing grammatical error correction (GEC) evaluation methods are predominantly English-centric, rely on strict edit alignment, and assume a single reference output, which limits their applicability to multilingual settings and generative models. This work introduces the first fluency-oriented, multi-reference evaluation framework, formalizing n-gram similarity as an aggregation problem over diverse linguistically valid corrections. The paper theoretically analyzes four aggregation strategies (select-best, simple-average, weighted-average, and merged-counts), characterizing their boundedness, monotonicity, and robustness, and instantiates them as multi-reference variants of GLEU. Experiments on Czech, Estonian, Ukrainian, and Chinese GEC datasets show that the strategies exhibit complementary trade-offs between fluency preservation and coverage, collectively improving the soundness, diversity, and cross-lingual adaptability of evaluation.
📝 Abstract
Evaluating grammatical error correction requires metrics that reflect the diversity of valid human corrections rather than privileging a single reference. Existing frameworks, largely edit-based and English-centric, rely on rigid alignments between system and reference edits, limiting their applicability in multilingual and generative settings. This paper introduces a formal framework for *fluency-based multi-reference evaluation*, framing *n*-gram similarity as an aggregation problem over multiple legitimate corrections. Within this formulation, we instantiate GLEU through four aggregation strategies (select-best, simple-average, weighted-average, and merged-counts) and analyze their properties of boundedness, monotonicity, and sensitivity to reference variation. Empirical results on Czech, Estonian, Ukrainian, and Chinese corpora show that these strategies capture complementary aspects of fluency and coverage. The framework unifies multi-reference evaluation into a principled, fluency-oriented approach that incorporates linguistic diversity without penalizing legitimate variation.
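The four aggregation strategies can be illustrated with a minimal sketch. Note this uses plain clipped *n*-gram precision as a stand-in for full GLEU (which additionally penalizes *n*-grams appearing in the source but not the reference); the function names, the weighting scheme, and the use of a single *n*-gram order are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def overlap(hyp, ref, n=2):
    """Clipped n-gram precision of a hypothesis against one reference
    (a simplified stand-in for per-reference GLEU)."""
    h, r = ngrams(hyp, n), ngrams(ref, n)
    total = sum(h.values())
    hits = sum(min(c, r[g]) for g, c in h.items())
    return hits / total if total else 0.0

def select_best(hyp, refs, n=2):
    # Score against each reference independently, keep the maximum.
    return max(overlap(hyp, r, n) for r in refs)

def simple_average(hyp, refs, n=2):
    # Uniform mean over per-reference scores.
    return sum(overlap(hyp, r, n) for r in refs) / len(refs)

def weighted_average(hyp, refs, weights, n=2):
    # Weighted mean; weights might reflect reference quality or frequency.
    return sum(w * overlap(hyp, r, n) for r, w in zip(refs, weights)) / sum(weights)

def merged_counts(hyp, refs, n=2):
    # Pool all references into one count table before scoring: each
    # hypothesis n-gram is credited up to its maximum count in any reference.
    merged = Counter()
    for r in refs:
        merged |= ngrams(r, n)  # Counter union = elementwise max
    h = ngrams(hyp, n)
    total = sum(h.values())
    hits = sum(min(c, merged[g]) for g, c in h.items())
    return hits / total if total else 0.0
```

The sketch makes the trade-offs concrete: select-best rewards matching any single reference closely, simple-average and weighted-average smooth over the reference set, and merged-counts gives credit for mixing valid fragments drawn from different references.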