AI Summary
Existing AI automated scoring systems, while efficient, fail to adequately characterize scoring uncertainty and inter-annotator disagreement. To address this, we propose *semantic entropy*, a novel uncertainty metric grounded in the reasoning process rather than the final score. Specifically, we prompt GPT-4 to generate multiple justifications for scoring the same short response; these justifications are clustered via entailment-based semantic similarity, and inter-cluster information entropy is computed to quantify explanatory diversity. Crucially, semantic entropy directly links semantic-level reasoning inconsistency with human scoring disagreement, a first in automated assessment. Empirical validation on the ASAP-SAS dataset demonstrates a statistically significant correlation (p < 0.01) between semantic entropy and actual score discrepancies. Moreover, the metric exhibits robustness across disciplines and task types. By grounding uncertainty in interpretable reasoning patterns, semantic entropy enhances the transparency and trustworthiness of AI grading and establishes a new paradigm for human-AI collaborative decision-making in educational assessment.
Abstract
Automated grading systems can efficiently score short-answer responses, yet they often fail to indicate when a grading decision is uncertain or potentially contentious. We introduce semantic entropy, a measure of variability across multiple GPT-4-generated explanations for the same student response, as a proxy for human grader disagreement. By clustering rationales via entailment-based similarity and computing entropy over these clusters, we quantify the diversity of justifications without relying on final output scores. We address three research questions: (1) Does semantic entropy align with human grader disagreement? (2) Does it generalize across academic subjects? (3) Is it sensitive to structural task features such as source dependency? Experiments on the ASAP-SAS dataset show that semantic entropy correlates with rater disagreement, varies meaningfully across subjects, and increases in tasks requiring interpretive reasoning. Our findings position semantic entropy as an interpretable uncertainty signal that supports more transparent and trustworthy AI-assisted grading workflows.
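The clustering-and-entropy step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `entails(a, b)` predicate is a hypothetical stand-in for an entailment judgment (e.g. from an NLI model), and rationales are grouped into one cluster when they mutually entail each other, after which Shannon entropy is computed over the cluster-size distribution.

```python
import math

def semantic_entropy(rationales, entails):
    """Compute semantic entropy over a set of generated rationales.

    rationales: list of explanation strings for the same student response.
    entails(a, b): hypothetical predicate, True if rationale a entails b
    (in practice this would be backed by an NLI/entailment model).

    Two rationales share a cluster when they bidirectionally entail each
    other's cluster representative; entropy is then taken over the
    normalized cluster sizes.
    """
    clusters = []  # each cluster is a list of mutually entailing rationales
    for r in rationales:
        for cluster in clusters:
            rep = cluster[0]  # compare against the cluster representative
            if entails(r, rep) and entails(rep, r):
                cluster.append(r)
                break
        else:
            clusters.append([r])  # no match: start a new cluster
    n = len(rationales)
    probs = [len(c) / n for c in clusters]
    # Shannon entropy in bits: 0 when all rationales agree,
    # higher when justifications are semantically diverse.
    return -sum(p * math.log2(p) for p in probs)
```

Under this sketch, a response whose rationales all fall into one cluster yields entropy 0 (the model's reasoning is consistent), while rationales split evenly across several clusters yield maximal entropy, signaling a case worth routing to a human grader.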