🤖 AI Summary
Knowledge tracing (KT) models often fail to detect students' erroneous selections of distractors, leaving latent misconceptions undiagnosed. To address this, we introduce predictive uncertainty modeling into KT for the first time, leveraging probabilistic deep learning to quantify per-prediction confidence. Experiments demonstrate a statistically significant positive correlation between high uncertainty and model misclassification (p < 0.01), enabling effective identification of student cognitive biases. Our approach requires no additional annotations or pedagogical interventions, yielding interpretable instructional signals that support precise diagnosis and adaptive remediation, even under resource constraints. Key contributions are: (1) the first uncertainty-aware KT framework; (2) empirical validation that uncertainty serves as a reliable proxy for prediction errors; and (3) a novel, operationally viable analytical dimension for trustworthy educational AI that balances reliability with practical deployability.
📝 Abstract
Research on Knowledge Tracing (KT) models has focused primarily on model development aimed at improving predictive accuracy. These models are most often wrong precisely when students choose a distractor, so student errors go undetected. We present an approach that adds a new capability to KT models by capturing predictive uncertainty, and we demonstrate that larger predictive uncertainty aligns with incorrect model predictions. We show that uncertainty in KT models is an informative, pedagogically useful signal for educational learning platforms, particularly in limited-resource settings where understanding student ability is essential.
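The abstract describes capturing predictive uncertainty but does not specify a mechanism. One common technique for this is Monte Carlo dropout, sketched below on a hypothetical two-feature logistic predictor of a student answering correctly. All weights, feature names, and hyperparameters here are illustrative assumptions, not the paper's actual model:

```python
# Illustrative sketch only: Monte Carlo dropout on a toy logistic predictor,
# showing how per-prediction uncertainty can be estimated for a KT-style
# "will the student answer correctly?" prediction. Not the paper's model.
import math
import random
import statistics

random.seed(0)

# Hypothetical learned weights: student skill and item difficulty -> P(correct)
W_SKILL, W_DIFF, BIAS = 2.0, -1.5, 0.3
DROPOUT_P = 0.2   # probability of dropping each weight in a stochastic pass
N_SAMPLES = 200   # number of Monte Carlo forward passes

def stochastic_forward(skill, difficulty):
    """One forward pass with dropout applied to the weights (inverted scaling)."""
    w_s = 0.0 if random.random() < DROPOUT_P else W_SKILL / (1 - DROPOUT_P)
    w_d = 0.0 if random.random() < DROPOUT_P else W_DIFF / (1 - DROPOUT_P)
    logit = w_s * skill + w_d * difficulty + BIAS
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid -> P(correct)

def predict_with_uncertainty(skill, difficulty):
    """Mean prediction and predictive std over Monte Carlo samples."""
    probs = [stochastic_forward(skill, difficulty) for _ in range(N_SAMPLES)]
    return statistics.mean(probs), statistics.stdev(probs)

# Two example predictions: a strong student on an easy item vs. a borderline case.
p_easy, u_easy = predict_with_uncertainty(skill=0.9, difficulty=0.1)
p_hard, u_hard = predict_with_uncertainty(skill=0.5, difficulty=0.7)
print(f"easy item: p={p_easy:.2f}, uncertainty={u_easy:.3f}")
print(f"hard item: p={p_hard:.2f}, uncertainty={u_hard:.3f}")
```

In a deployment like the one the abstract envisions, predictions whose uncertainty exceeds a chosen threshold could be flagged for review rather than trusted, which is the kind of interpretable signal the paper argues is pedagogically useful.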