🤖 AI Summary
This study addresses the output uncertainty of large language models (LLMs) in automated scoring, which may adversely affect educational decision-making and learning outcomes. It presents the first systematic evaluation of multiple uncertainty quantification methods within the context of educational scoring, conducting comprehensive experiments across diverse datasets, LLM families, and decoding strategies. The work analyzes how model architecture, task characteristics, and generation settings influence the reliability of uncertainty estimates. By revealing performance disparities and contextual applicability among different uncertainty metrics, this research provides both theoretical grounding and practical guidance for developing dependable, uncertainty-aware automated scoring systems that can support trustworthy educational applications.
📝 Abstract
The rapid rise of large language models (LLMs) is reshaping the landscape of automatic assessment in education. While these systems offer substantial advantages in adaptability to diverse question types and flexibility in output formats, they also introduce a new challenge: output uncertainty, stemming from the inherently probabilistic nature of LLM generation. This uncertainty cannot be ignored in automatic assessment, because assessment results often inform subsequent pedagogical actions, such as providing feedback to students or guiding instructional decisions. Unreliable or poorly calibrated uncertainty estimates can lead to unstable downstream interventions, potentially disrupting students' learning and causing unintended negative consequences. To systematically understand this challenge and inform future research, we benchmark a broad range of uncertainty quantification methods in the context of LLM-based automatic assessment. Although these methods have proven effective on many tasks in other domains, their applicability and reliability in educational settings, particularly automatic grading, remain underexplored. Through comprehensive analyses across multiple assessment datasets, LLM families, and generation control settings, we characterize the uncertainty patterns that LLMs exhibit in grading scenarios. Based on these findings, we evaluate the strengths and limitations of different uncertainty metrics and analyze how key factors, including model family, assessment task, and decoding strategy, influence uncertainty estimates. Our study provides actionable insights into the characteristics of uncertainty in LLM-based automatic assessment and lays the groundwork for more reliable, uncertainty-aware grading systems.
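To make the setting concrete, here is a minimal sketch of one widely used family of uncertainty signals in this context: sample the same grading prompt several times at nonzero temperature and measure disagreement among the returned scores. This is an illustrative example of sampling-based uncertainty quantification in general, not a reproduction of the paper's specific methods; the `score_entropy` helper and the sample values are hypothetical.

```python
import math
from collections import Counter

def score_entropy(sampled_scores):
    """Normalized Shannon entropy over repeated LLM score samples.

    Returns 0.0 when all samples agree (low uncertainty) and 1.0 when
    the samples are spread uniformly over the observed scores (high
    uncertainty).
    """
    counts = Counter(sampled_scores)
    if len(counts) <= 1:
        return 0.0  # perfect agreement across samples
    n = len(sampled_scores)
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(counts))  # normalize by max entropy

# Ten stochastic re-gradings of the same student answer (hypothetical
# values; in practice these would come from repeated LLM calls with
# temperature > 0).
samples = [3, 3, 4, 3, 3, 2, 3, 4, 3, 3]
print(f"disagreement entropy: {score_entropy(samples):.3f}")
```

A grading pipeline could use such a signal to defer low-confidence cases to a human rater; how well signals like this track actual grading errors across models, datasets, and decoding settings is precisely what the paper benchmarks.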