Measuring the Impact of Student Gaming Behaviors on Learner Modeling

📅 2025-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how student gaming behaviors (such as excessive hint use and random guessing) compromise the reliability of Knowledge Tracing (KT) models, formally modeling them for the first time as Data Poisoning Attacks (DPAs). We propose a scalable DPA simulation framework to systematically quantify how these behaviors interfere with concept-mastery estimation, and design an unsupervised anomaly detection method to improve the cross-platform generalization of behavior identification. Experiments show that random guessing causes the most severe degradation of KT performance. Our approach generalizes across platforms without labeled data, reducing KT prediction error by 23% on average. The core contribution is establishing a theoretical link between gaming behaviors and data poisoning in educational settings, delivering the first integrated framework for behavior-aware modeling and robust detection tailored to KT.

📝 Abstract
The expansion of large-scale online education platforms has made vast amounts of student interaction data available for knowledge tracing (KT). KT models estimate students' concept mastery from interaction data, but their performance is sensitive to input data quality. Gaming behaviors, such as excessive hint use, may misrepresent students' knowledge and undermine model reliability. However, systematic investigations of how different types of gaming behaviors affect KT remain scarce, and existing studies rely on costly manual analysis that does not capture behavioral diversity. In this study, we conceptualize gaming behaviors as a form of data poisoning, defined as the deliberate submission of incorrect or misleading interaction data to corrupt a model's learning process. We design Data Poisoning Attacks (DPAs) to simulate diverse gaming patterns and systematically evaluate their impact on KT model performance. Moreover, drawing on advances in DPA detection, we explore unsupervised approaches to enhance the generalizability of gaming behavior detection. We find that KT model performance degrades most in response to random-guess behaviors. Our findings provide insights into the vulnerabilities of KT models and highlight the potential of adversarial methods for improving the robustness of learning analytics systems.
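The random-guess attack the abstract describes can be sketched as a transformation of KT input data. This is a minimal illustrative sketch, not the paper's implementation: the function name `poison_random_guess`, the `(student, item, correct)` record format, and the `n_options` parameter (responses are correct with probability 1/`n_options`, as for a multiple-choice item) are all assumptions.

```python
import random

def poison_random_guess(interactions, rate, n_options=4, seed=0):
    """Simulate a random-guess data poisoning attack on KT input data.

    interactions: list of (student_id, item_id, correct) records with
    correct in {0, 1}. A fraction `rate` of records has its response
    replaced by a random guess that is correct with probability
    1 / n_options, mimicking a student who answers without thinking.
    Student and item fields are left untouched.
    """
    rng = random.Random(seed)
    poisoned = []
    for student, item, correct in interactions:
        if rng.random() < rate:
            correct = int(rng.random() < 1.0 / n_options)
        poisoned.append((student, item, correct))
    return poisoned

# Poison 30% of a toy interaction log, then compare KT metrics
# on the clean vs. poisoned data to quantify the attack's impact.
clean = [(1, i, 1) for i in range(10)]
attacked = poison_random_guess(clean, rate=0.3)
```

Varying `rate` per simulated student would let one study how the prevalence of gaming affects mastery estimates, which is the kind of systematic evaluation the paper performs at scale.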
Problem

Research questions and friction points this paper is trying to address.

Analyzing gaming behaviors' impact on knowledge tracing models
Simulating data poisoning attacks to assess model vulnerabilities
Exploring unsupervised detection for robust learning analytics systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulating gaming behaviors as data poisoning attacks
Using unsupervised methods for gaming behavior detection
Evaluating impact on knowledge tracing model performance
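The unsupervised detection idea above can be illustrated with a simple population-level outlier test: flag students whose behavioral features (e.g. hint-request rate) deviate strongly from the cohort, without any labeled examples of gaming. This is a hedged sketch of the general technique, not the paper's detector; the function name, the feature tuple format, and the z-score threshold are assumptions.

```python
from statistics import mean, pstdev

def flag_anomalous_students(features, threshold=2.0):
    """Unsupervised flagging of potentially gaming students.

    features: dict mapping student_id -> tuple of behavioral features
    (e.g. hint-request rate, fraction of very fast responses).
    A student is flagged when any feature lies more than `threshold`
    standard deviations from the cohort mean. No labels are needed,
    which is what makes the approach portable across platforms.
    """
    ids = list(features)
    n_dims = len(next(iter(features.values())))
    flagged = set()
    for d in range(n_dims):
        vals = [features[s][d] for s in ids]
        mu, sigma = mean(vals), pstdev(vals)
        if sigma == 0:  # no variation in this feature: nothing to flag
            continue
        for s in ids:
            if abs(features[s][d] - mu) / sigma > threshold:
                flagged.add(s)
    return flagged

# Twenty students with typical hint-request rates and one heavy hint user.
feats = {f"s{i}": (0.1,) for i in range(20)}
feats["gamer"] = (0.9,)
suspects = flag_anomalous_students(feats)
```

A density- or isolation-based detector could replace the z-score rule without changing the workflow: compute behavioral features per student, score them unsupervised, and exclude or down-weight flagged interactions before fitting the KT model.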