Machine-Assisted Grading of Nationwide School-Leaving Essay Exams with LLMs and Statistical NLP

📅 2026-01-22
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work proposes an efficient, equitable, and educationally compliant automated scoring approach for high-stakes national graduation writing assessments. Grounded in official curriculum-based rubrics, the system integrates large language models (LLMs) with statistical natural language processing (NLP) techniques, marking the first deployment of LLM-assisted scoring at a national scale in a low-resource language context. It enables fine-grained subscore evaluation and personalized feedback while ensuring scoring consistency and security through a human-in-the-loop workflow, bias detection mechanisms, and prompt injection safeguards. Empirical results demonstrate that the system's scores align closely with those of human raters, consistently falling within the range of inter-rater variability, thereby validating its feasibility, reliability, and regulatory compliance for nationwide implementation.

πŸ“ Abstract
Large language models (LLMs) enable rapid and consistent automated evaluation of open-ended exam responses, including dimensions of content and argumentation that have traditionally required human judgment. This is particularly important where a large number of exams must be graded in a limited time frame, such as nationwide graduation exams in various countries. Here, we examine the applicability of automated scoring on two large datasets of trial exam essays from two full national cohorts in Estonia. We operationalize the official curriculum-based rubric and compare LLM- and statistical natural language processing (NLP) based assessments with human panel scores. The results show that automated scoring can achieve performance comparable to that of human raters and tends to fall within the human scoring range. We also evaluate bias, prompt injection risks, and LLMs as essay writers. These findings demonstrate that a principled, rubric-driven, human-in-the-loop scoring pipeline is viable for high-stakes writing assessment, which is particularly relevant for digitally advanced societies like Estonia, a country about to adopt a fully electronic examination system. Furthermore, the system produces fine-grained subscore profiles that can be used to generate systematic, personalized feedback for instruction and exam preparation. The study provides evidence that LLM-assisted assessment can be implemented at a national scale, even in a small-language context, while maintaining human oversight and compliance with emerging educational and regulatory standards.
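The abstract's central validity claim is that automated scores "tend to fall within the human scoring range," i.e. within the inter-rater variability of the human panel. A minimal sketch of that check (not the authors' code; function name and data are hypothetical) might look like this:

```python
# Illustrative sketch: for each essay, test whether the automated score
# lies within the min-max span of the human panel's scores, and report
# the fraction of essays for which this holds.

def within_human_range(auto_scores, panel_scores):
    """Fraction of essays whose automated score falls inside the
    human panel's inter-rater range [min, max]."""
    hits = 0
    for auto, panel in zip(auto_scores, panel_scores):
        if min(panel) <= auto <= max(panel):
            hits += 1
    return hits / len(auto_scores)

# Hypothetical example: 4 essays, each scored by a panel of 3 raters.
panel = [(8, 9, 10), (5, 6, 6), (7, 7, 9), (3, 4, 4)]
auto = [9, 7, 8, 4]
print(within_human_range(auto, panel))  # 0.75: 3 of 4 within panel range
```

A high value of this statistic means disagreement between the automated scorer and any single human rater is no larger than the disagreement already present among the human raters themselves.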
Problem

Research questions and friction points this paper is trying to address.

automated essay scoring
large language models
high-stakes assessment
national exams
rubric-based evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

large language models
automated essay scoring
human-in-the-loop
rubric-based assessment
statistical NLP
Andres Karjus
Tallinn University; Estonian Business School
linguistics, culture and language dynamics, cultural data analytics, digital humanities, AI
Kais Allkivi
Tallinn University
Silvia Maine
Tallinn University
Katarin Leppik
Tallinn University
Krister Kruusmaa
Tallinn University, Institute of the Estonian Language
Merilin Aruvee
Tallinn University