🤖 AI Summary
This work proposes a human-in-the-loop, end-to-end grading framework to address the challenges of fairness and efficiency in automated scoring of handwritten mathematics assignments. By integrating fine-grained rubrics, multi-round large language model (LLM) evaluations, consistency checks, and mandatory human review for ambiguous cases, the system ensures both accuracy and equity in assessment. The pipeline incorporates automated scanning, anonymisation, and collaborative human–AI validation. Deployed across two undergraduate mathematics courses, the framework reduced instructors' grading time by approximately 23%, achieved inter-rater consistency comparable to or better than fully manual grading, and effectively contained model errors. This study establishes a scalable paradigm for the reliable deployment of LLMs in educational assessment contexts.
📝 Abstract
Providing timely and individualised feedback on handwritten student work is highly beneficial for learning but difficult to achieve at scale. This challenge has become more pressing as generative AI undermines the reliability of take-home assessments, shifting emphasis toward supervised, in-class evaluation. We present a scalable, end-to-end workflow for LLM-assisted grading of short, pen-and-paper assessments. The workflow spans (1) constructing solution keys, (2) developing detailed rubric-style grading keys used to guide the LLM, and (3) a grading procedure that combines automated scanning and anonymisation, multi-pass LLM scoring, automated consistency checks, and mandatory human verification. We deploy the system in two undergraduate mathematics courses using six low-stakes in-class tests. Empirically, LLM assistance reduces grading time by approximately 23% while achieving agreement comparable to, and in several cases tighter than, fully manual grading. Occasional model errors occur but are effectively contained by the hybrid design. Overall, our results show that carefully embedded human-in-the-loop LLM grading can substantially reduce workload while maintaining fairness and accuracy.
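The core safeguard in the workflow, multi-pass LLM scoring followed by an automated consistency check that routes disagreements to a human grader, can be sketched as follows. This is an illustrative sketch only: the function names, the spread-based consistency rule, and the tolerance threshold are assumptions for exposition, not the paper's actual implementation, and `mock_scorer` stands in for a rubric-guided LLM call.

```python
from typing import Callable, Dict, List


def grade_with_review_flag(
    score_fn: Callable[[str], float],
    answer: str,
    passes: int = 3,
    tolerance: float = 0.5,
) -> Dict[str, object]:
    """Score an answer several times; flag it for mandatory human
    review when the passes disagree by more than `tolerance` points.

    `score_fn` stands in for an LLM call guided by the grading key.
    """
    scores: List[float] = [score_fn(answer) for _ in range(passes)]
    spread = max(scores) - min(scores)
    return {
        "scores": scores,
        # Median of the passes is a robust aggregate score.
        "median": sorted(scores)[len(scores) // 2],
        "needs_human_review": spread > tolerance,
    }


# Hypothetical stand-in scorer; a real system would query an LLM
# with the rubric-style grading key and the scanned, anonymised work.
def mock_scorer(answer: str) -> float:
    return 2.0 if "x = 3" in answer else 0.5


result = grade_with_review_flag(mock_scorer, "x = 3")
print(result["needs_human_review"])  # False: all passes agree
```

Deterministic scorers always agree with themselves; in practice the passes vary (different prompts, sampling temperature, or model seeds), and any answer whose scores spread beyond the tolerance is escalated to the human verification step rather than auto-graded.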