Automatic Essay Scoring and Feedback Generation in Basque Language Learning

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses automated essay scoring (AES) and pedagogical feedback generation for Basque—a low-resource language. We introduce the first publicly available, expert-annotated CEFR C1-level Basque essay dataset (3,200 essays), annotated across multidimensional quality criteria including accuracy, lexical richness, and coherence, alongside expert feedback and representative error examples. Methodologically, we propose a novel feedback quality evaluation framework integrating automatic consistency assessment with expert validation, and develop interpretable, teaching-oriented AES and feedback generation models via supervised fine-tuning of RoBERTa-EusCrawl and Latxa 8B/70B. Results show that fine-tuned Latxa significantly outperforms GPT-5 and Claude Sonnet 4.5 in scoring consistency and feedback utility, while detecting a broader spectrum of linguistic errors. This work establishes a high-quality benchmark dataset, reproducible methodology, and open-source tools for low-resource NLP research.

📝 Abstract
This paper introduces the first publicly available dataset for Automatic Essay Scoring (AES) and feedback generation in Basque, targeting the CEFR C1 proficiency level. The dataset comprises 3,200 essays from HABE, each annotated by expert evaluators with criterion-specific scores covering correctness, richness, coherence, cohesion, and task alignment, enriched with detailed feedback and error examples. We fine-tune open-source models, including RoBERTa-EusCrawl and Latxa 8B/70B, for both scoring and explanation generation. Our experiments show that encoder models remain highly reliable for AES, while supervised fine-tuning (SFT) of Latxa significantly enhances performance, surpassing state-of-the-art (SoTA) closed-source systems such as GPT-5 and Claude Sonnet 4.5 in scoring consistency and feedback quality. We also propose a novel evaluation methodology for assessing feedback generation, combining automatic consistency metrics with expert-based validation of extracted learner errors. Results demonstrate that the fine-tuned Latxa model produces criterion-aligned, pedagogically meaningful feedback and identifies a wider range of error types than proprietary models. This resource and benchmark establish a foundation for transparent, reproducible, and educationally grounded NLP research in low-resource languages such as Basque.
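The abstract repeatedly measures "scoring consistency" between model-assigned and expert-assigned criterion scores. The paper does not specify its exact metric, but quadratic weighted kappa (QWK) is the standard agreement statistic in AES evaluation. The sketch below is an illustrative, self-contained implementation under that assumption; the function name and the 0–4 score range are hypothetical, not taken from the paper.

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_score=0, max_score=4):
    """Quadratic weighted kappa between two integer score sequences.

    Returns 1.0 for perfect agreement, 0.0 for chance-level agreement;
    disagreements are penalized quadratically by their distance.
    """
    n = max_score - min_score + 1
    num_items = len(rater_a)
    # Observed confusion matrix between the two raters.
    observed = [[0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_score][b - min_score] += 1
    # Marginal score histograms, used for the chance-expected matrix.
    hist_a = Counter(a - min_score for a in rater_a)
    hist_b = Counter(b - min_score for b in rater_b)
    numerator = 0.0
    denominator = 0.0
    for i in range(n):
        for j in range(n):
            weight = ((i - j) ** 2) / ((n - 1) ** 2)
            expected = hist_a[i] * hist_b[j] / num_items
            numerator += weight * observed[i][j]
            denominator += weight * expected
    return 1.0 - numerator / denominator
```

In practice one would compute this per scoring criterion (correctness, richness, coherence, cohesion, task alignment) and compare each model's kappa against inter-annotator agreement; `sklearn.metrics.cohen_kappa_score(..., weights="quadratic")` gives the same quantity.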
Problem

Research questions and friction points this paper is trying to address.

Develops first Basque AES dataset for CEFR C1 level
Fine-tunes open-source models for scoring and feedback generation
Proposes evaluation method for feedback quality and error detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned open-source models for Basque AES
Novel evaluation combining metrics and expert validation
Latxa model surpasses SoTA closed-source systems
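The proposed evaluation combines automatic consistency metrics with expert validation of errors extracted from generated feedback. A natural way to quantify the expert-validation half is precision/recall of model-flagged errors against expert-confirmed ones; the sketch below assumes errors are represented as (category, span) pairs with exact matching, which is a simplification of whatever matching scheme the paper actually uses. All names here are illustrative.

```python
def error_detection_scores(predicted, gold):
    """Precision, recall, and F1 of model-flagged learner errors against
    expert-validated errors, each given as (category, span_text) pairs."""
    pred, ref = set(predicted), set(gold)
    true_positives = len(pred & ref)
    precision = true_positives / len(pred) if pred else 0.0
    recall = true_positives / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: the model flags two errors, experts confirm one of them.
predicted = [("lexical", "egin dut joan"), ("cohesion", "baina hala ere")]
gold = [("lexical", "egin dut joan")]
p, r, f1 = error_detection_scores(predicted, gold)
```

Per-category recall computed this way would also surface the paper's claim that the fine-tuned Latxa detects a broader spectrum of error types than the proprietary models.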
Ekhi Azurmendi
HiTZ Center - Ixa, University of the Basque Country UPV/EHU
Xabier Arregi
HiTZ Center - Ixa, University of the Basque Country UPV/EHU
Oier Lopez de Lacalle
University of the Basque Country
Natural Language Processing · Word Sense Disambiguation · Information Extraction · Relation Extraction