KeenKT: Knowledge Mastery-State Disambiguation for Knowledge Tracing

📅 2025-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In knowledge tracing, point estimation fails to disentangle students' true proficiency from behavioral noise, resulting in ambiguous mastery-state modeling. To address this, we propose KeenKT, the first KT framework to model latent knowledge states at each interaction step with the Normal-Inverse Gaussian (NIG) distribution, explicitly decoupling ability estimation from epistemic and aleatoric uncertainty. We further introduce an NIG-distance-based attention mechanism to enhance sensitivity to learning dynamics and transient performance fluctuations. Additionally, we design a diffusion-driven denoising reconstruction objective coupled with distributional contrastive learning to jointly optimize robust state estimation and uncertainty-aware representation. Extensive experiments on six benchmark datasets demonstrate consistent superiority over state-of-the-art methods: AUC improves by up to 5.85% and ACC by up to 6.89%, significantly enhancing both predictive accuracy and robustness to behavioral noise.

📝 Abstract
Knowledge Tracing (KT) aims to dynamically model a student's mastery of knowledge concepts based on their historical learning interactions. Most current methods rely on single-point estimates, which cannot distinguish true ability from momentary performance bursts or careless slips, creating ambiguity in judging mastery. To address this issue, we propose a Knowledge Mastery-State Disambiguation for Knowledge Tracing model (KeenKT), which represents a student's knowledge state at each interaction as a Normal-Inverse Gaussian (NIG) distribution, thereby capturing the fluctuations in student learning behaviors. Furthermore, we design an NIG-distance-based attention mechanism to model the dynamic evolution of the knowledge state. In addition, we introduce a diffusion-based denoising reconstruction loss and a distributional contrastive learning loss to enhance the model's robustness. Extensive experiments on six public datasets demonstrate that KeenKT outperforms state-of-the-art KT models in prediction accuracy and in sensitivity to behavioral fluctuations, yielding AUC improvements of up to 5.85% and ACC improvements of up to 6.89%.
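The core idea of distributional knowledge states can be sketched as follows. This is an illustrative NumPy toy, not the paper's implementation: the NIG moment formulas are the standard ones for NIG(μ, α, β, δ), but the specific distance used for attention (variance-scaled distance between means) is an assumption standing in for the paper's NIG distance.

```python
import numpy as np

def nig_moments(mu, alpha, beta, delta):
    """Mean and variance of a Normal-Inverse Gaussian distribution.

    Standard NIG(mu, alpha, beta, delta) moments (requires alpha > |beta|):
      gamma = sqrt(alpha^2 - beta^2)
      mean  = mu + delta * beta / gamma
      var   = delta * alpha^2 / gamma^3
    """
    gamma = np.sqrt(alpha ** 2 - beta ** 2)
    return mu + delta * beta / gamma, delta * alpha ** 2 / gamma ** 3

def nig_attention_weights(query, keys):
    """Toy distributional attention: each knowledge state is an NIG
    parameter tuple (mu, alpha, beta, delta); similarity is the negative
    variance-scaled squared distance between means (an illustrative
    proxy, not the paper's exact NIG distance)."""
    q_mean, q_var = nig_moments(*query)
    scores = []
    for k in keys:
        k_mean, k_var = nig_moments(*k)
        scores.append(-(q_mean - k_mean) ** 2 / (q_var + k_var))
    scores = np.array(scores)
    exp = np.exp(scores - scores.max())  # softmax over past interactions
    return exp / exp.sum()

# Example: one query state attending over three past interaction states;
# the parameter values are arbitrary and only for demonstration.
query = (0.5, 2.0, 0.5, 1.0)
keys = [(0.5, 2.0, 0.5, 1.0), (1.5, 2.0, 0.0, 1.0), (-1.0, 3.0, 1.0, 0.5)]
w = nig_attention_weights(query, keys)
```

Because the first key is identical to the query, it receives the largest attention weight, illustrating how states close in distribution (not just in point estimate) dominate the mixture.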
Problem

Research questions and friction points this paper is trying to address.

Models student knowledge mastery with uncertainty to reduce ambiguity
Captures learning behavior fluctuations using Normal-Inverse-Gaussian distribution
Enhances robustness with denoising and contrastive learning losses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Normal-Inverse-Gaussian distribution for knowledge state representation
Employs NIG-distance-based attention for dynamic state evolution
Incorporates diffusion denoising and contrastive learning for robustness
Zhifei Li
Research Scientist at Google
machine translation, natural language processing, machine learning, wireless networks
Lifan Chen
School of Computer Science, Hubei University, Wuhan 430062, China
Jiali Yi
School of Computer Science, Hubei University, Wuhan 430062, China
Xiaoju Hou
Institute of Vocational Education, Guangdong Industry Polytechnic University, Guangzhou 510300, China
Yue Zhao
Shandong Police College, Ji’nan 250200, China
Wenxin Huang
School of Computer Science, Hubei University, Wuhan 430062, China
Miao Zhang
School of Computer Science, Hubei University, Wuhan 430062, China
Kui Xiao
School of Computer Science, Hubei University, Wuhan 430062, China
Bing Yang
School of Computer Science, Hubei University, Wuhan 430062, China