🤖 AI Summary
Existing Korean automated writing evaluation (AWE) tools suffer from inadequate multi-perspective analysis, poor error propagation mitigation, and limited interpretability, resulting in inefficient assessment and delayed feedback. To address these limitations, this paper proposes the first multi-level AWE system specifically designed for Korean writing. Our approach innovatively integrates low-level morphological analysis, mid-level interpretable lexical feature modeling, and high-level educationally grounded rubric-driven scoring. It employs rule-enhanced morphological segmentation, domain-adapted token representation, and hierarchical joint regression/classification modeling. Experimental results demonstrate that the system significantly outperforms established baselines across multiple Korean AWE tasks, achieving substantial improvements in accuracy and quadratic weighted kappa. Moreover, it exhibits high inter-rater consistency and robustness suitable for industrial-scale deployment.
📝 Abstract
Evaluating writing quality is complex and time-consuming, often delaying feedback to learners. While automated writing evaluation tools are effective for English, Korean automated writing evaluation tools face challenges due to their inability to address multi-view analysis, error propagation, and evaluation explainability. To overcome these challenges, we introduce UKTA (Unified Korean Text Analyzer), a comprehensive Korean text analysis and writing evaluation system. UKTA provides accurate low-level morpheme analysis, key lexical features for mid-level explainability, and transparent high-level rubric-based writing scores. Our approach improves accuracy and quadratic weighted kappa over existing baselines, positioning UKTA as a leading multi-perspective tool for Korean text analysis and writing evaluation.
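Both the summary and abstract report gains in quadratic weighted kappa (QWK), the standard agreement metric for essay scoring. As a reference for readers unfamiliar with it, here is a minimal pure-Python sketch of QWK between two raters' integer scores; the function name and signature are illustrative, not part of the UKTA system:

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa between two lists of integer scores.

    kappa = 1 - sum(w_ij * O_ij) / sum(w_ij * E_ij), where
    w_ij = (i - j)^2 / (N - 1)^2, O is the observed confusion matrix,
    and E is the expected matrix under rater independence.
    """
    n = max_rating - min_rating + 1
    num_items = len(rater_a)

    # Observed confusion matrix between the two raters.
    observed = [[0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating][b - min_rating] += 1

    # Marginal score histograms, used to build the expected matrix.
    hist_a = Counter(a - min_rating for a in rater_a)
    hist_b = Counter(b - min_rating for b in rater_b)

    numerator = denominator = 0.0
    for i in range(n):
        for j in range(n):
            weight = (i - j) ** 2 / (n - 1) ** 2
            expected = hist_a[i] * hist_b[j] / num_items
            numerator += weight * observed[i][j]
            denominator += weight * expected
    return 1.0 - numerator / denominator

# Perfect agreement yields kappa = 1.0.
print(quadratic_weighted_kappa([1, 2, 3], [1, 2, 3], 1, 3))  # → 1.0
```

Because the weights grow quadratically with the distance between the two scores, QWK penalizes large scoring disagreements far more than near-misses, which is why it is preferred over plain accuracy for rubric-based writing evaluation.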