🤖 AI Summary
This study addresses automated essay scoring (AES) and pedagogical feedback generation for Basque—a low-resource language. We introduce the first publicly available, expert-annotated CEFR C1-level Basque essay dataset (3,200 essays), annotated across multidimensional quality criteria including accuracy, lexical richness, and coherence, alongside expert feedback and representative error examples. Methodologically, we propose a novel feedback quality evaluation framework integrating automatic consistency assessment with expert validation, and develop interpretable, teaching-oriented AES and feedback generation models via supervised fine-tuning of RoBERTa-EusCrawl and Latxa 8B/70B. Results show that fine-tuned Latxa significantly outperforms GPT-5 and Claude Sonnet 4.5 in scoring consistency and feedback utility, while detecting a broader spectrum of linguistic errors. This work establishes a high-quality benchmark dataset, reproducible methodology, and open-source tools for low-resource NLP research.
📝 Abstract
This paper introduces the first publicly available dataset for Automatic Essay Scoring (AES) and feedback generation in Basque, targeting the CEFR C1 proficiency level. The dataset comprises 3,200 essays from HABE, each annotated by expert evaluators with criterion-specific scores covering correctness, richness, coherence, cohesion, and task alignment, enriched with detailed feedback and error examples. We fine-tune open-source models, including RoBERTa-EusCrawl and Latxa 8B/70B, for both scoring and explanation generation. Our experiments show that encoder models remain highly reliable for AES, while supervised fine-tuning (SFT) of Latxa significantly enhances performance, surpassing state-of-the-art (SoTA) closed-source systems such as GPT-5 and Claude Sonnet 4.5 in scoring consistency and feedback quality. We also propose a novel evaluation methodology for assessing feedback generation, combining automatic consistency metrics with expert-based validation of extracted learner errors. Results demonstrate that the fine-tuned Latxa model produces criterion-aligned, pedagogically meaningful feedback and identifies a wider range of error types than proprietary models. Together, this resource and benchmark establish a foundation for transparent, reproducible, and educationally grounded NLP research in low-resource languages such as Basque.