🤖 AI Summary
To address the scarcity of learner corpora, coarse-grained annotations, and inconsistent evaluation criteria in Korean-as-a-second-language (L2) writing research, this study introduces KoLLA, an enhanced Korean L2 writing corpus. KoLLA provides the first multi-reference grammatical error correction (GEC) annotations for Korean, together with fine-grained human scores for grammatical accuracy, coherence, and lexical diversity that follow the National Institute of Korean Language's standardized rubric. Annotation combines multi-expert collaboration, rubric-driven scoring, inter-annotator consistency checks, and cross-reference discrepancy analysis to balance linguistic variability against assessment objectivity. Empirical evaluation demonstrates that KoLLA improves the robustness of GEC models and the precision of educational outcome quantification. As the first benchmark resource integrating multi-reference GEC annotations with standardized, rubric-based scoring, KoLLA advances Korean L2 writing research, automated assessment, and pedagogical feedback.
📝 Abstract
Despite growing global interest in Korean language education, learner corpora tailored to Korean L2 writing remain scarce. To address this gap, we enhance the KoLLA Korean learner corpus by adding multiple grammatical error correction (GEC) references, thereby enabling more nuanced and flexible evaluation of GEC systems and reflecting the variability of human language. Additionally, we enrich the corpus with rubric-based scores aligned with guidelines from the Korean National Language Institute, capturing grammatical accuracy, coherence, and lexical diversity. These enhancements make KoLLA a robust and standardized resource for research in Korean L2 education, supporting advances in language learning, assessment, and automated error correction.
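To illustrate why multiple references matter for GEC evaluation, the sketch below (not from the paper; function names and the token-level F1 metric are placeholders for standard edit-based GEC metrics such as F0.5) scores a system output against each reference and keeps the best match, so any valid correction is credited rather than penalized for differing from a single gold answer:

```python
# Illustrative sketch of multi-reference GEC scoring (assumed, not the
# paper's evaluation code). Token-level F1 stands in for edit-based
# metrics; the key idea is taking the max over references.
from collections import Counter


def token_f1(hyp: str, ref: str) -> float:
    """F1 over the token multisets of a hypothesis and one reference."""
    hyp_toks, ref_toks = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((hyp_toks & ref_toks).values())  # multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp_toks.values())
    recall = overlap / sum(ref_toks.values())
    return 2 * precision * recall / (precision + recall)


def multi_ref_score(hyp: str, refs: list[str]) -> float:
    """Score against every reference and keep the best match."""
    return max(token_f1(hyp, r) for r in refs)
```

With two equally valid corrections as references, a system that produces either one receives full credit, whereas single-reference scoring would penalize whichever correction was not chosen as gold.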