Human-in-the-Loop LLM Grading for Handwritten Mathematics Assessments

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a human-in-the-loop, end-to-end grading framework to address the challenges of fairness and efficiency in automated scoring of handwritten mathematics assignments. By integrating fine-grained rubrics, multi-round large language model (LLM) evaluations, consistency checks, and mandatory human review for ambiguous cases, the system ensures both accuracy and equity in assessment. The pipeline incorporates automated scanning, anonymization, and collaborative human–AI validation. Deployed across two undergraduate mathematics courses, the framework reduced instructors’ grading time by 23%, achieved inter-rater consistency comparable to or better than fully manual grading, and effectively mitigated model errors. This study establishes a scalable paradigm for the reliable deployment of LLMs in educational assessment contexts.

📝 Abstract
Providing timely and individualised feedback on handwritten student work is highly beneficial for learning but difficult to achieve at scale. This challenge has become more pressing as generative AI undermines the reliability of take-home assessments, shifting emphasis toward supervised, in-class evaluation. We present a scalable, end-to-end workflow for LLM-assisted grading of short, pen-and-paper assessments. The workflow spans (1) constructing solution keys, (2) developing detailed rubric-style grading keys used to guide the LLM, and (3) a grading procedure that combines automated scanning and anonymisation, multi-pass LLM scoring, automated consistency checks, and mandatory human verification. We deploy the system in two undergraduate mathematics courses using six low-stakes in-class tests. Empirically, LLM assistance reduces grading time by approximately 23% while achieving agreement comparable to, and in several cases tighter than, fully manual grading. Occasional model errors occur but are effectively contained by the hybrid design. Overall, our results show that carefully embedded human-in-the-loop LLM grading can substantially reduce workload while maintaining fairness and accuracy.
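The core safeguard in the grading procedure, multi-pass LLM scoring followed by an automated consistency check that routes ambiguous cases to mandatory human verification, can be sketched as below. This is an illustrative reconstruction, not the authors' implementation: the aggregation rule (median), the disagreement threshold, and the function names are hypothetical choices for exposition.

```python
import statistics

def aggregate_passes(scores, max_spread=1.0):
    """Combine scores from independent LLM grading passes.

    `scores` is a list of rubric scores for one answer, one per pass.
    If the passes disagree by more than `max_spread` (a hypothetical
    tolerance, not the paper's actual parameter), the consistency
    check fails and the case is escalated to mandatory human review.
    Returns (score, needs_human_review).
    """
    spread = max(scores) - min(scores)
    if spread > max_spread:
        return None, True            # ambiguous: a human must grade it
    return statistics.median(scores), False

# Consistent passes: the median is accepted automatically.
score, review = aggregate_passes([8.0, 8.5, 8.0])   # -> (8.0, False)

# Divergent passes: no automatic score; flagged for the instructor.
score, review = aggregate_passes([3.0, 7.5, 6.0])   # -> (None, True)
```

The key design property is that the model never has the final word on contested cases: automation only resolves answers where independent passes already agree, which is how occasional model errors are contained by the hybrid design.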
Problem

Research questions and friction points this paper is trying to address.

human-in-the-loop
LLM grading
handwritten mathematics assessments
scalable feedback
in-class evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-in-the-Loop
LLM-assisted grading
handwritten mathematics assessment
rubric-based evaluation
automated consistency checking
Arne Vanhoyweghen
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Vincent Holst
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Melika Mobini
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Lukas Van de Voorde
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Tibo Vanleke
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Bert Verbruggen
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Brecht Verbeken
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Andres Algaba
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Sam Verboven
Assistant Professor, Vrije Universiteit Brussel
Machine Learning | Deep Learning | Causality
Marie-Anne Guerry
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Filip Van Droogenbroeck
Data Analytics Lab, Vrije Universiteit Brussel, 1050 Brussel, Belgium
Vincent Ginis
Vrije Universiteit Brussel / Harvard University
Physics | Machine Learning