🤖 AI Summary
To address the high computational overhead in knowledge tracing (KT) caused by large-scale relational graphs and long interaction sequences, this paper proposes DGAKT—a dual-graph attention model based on dynamic subgraph sampling. DGAKT avoids costly global graph modeling by extracting only the local subgraph relevant to the target student-exercise interaction. It employs a two-level graph attention mechanism to jointly encode sequential student interactions and higher-order structural relationships among students, exercises, and knowledge concepts. This design substantially reduces memory consumption and computational complexity while improving modeling accuracy, making DGAKT, per the authors, the first KT method to combine high predictive performance with low resource consumption. Extensive experiments demonstrate its superiority over state-of-the-art approaches across multiple benchmark datasets, establishing a new paradigm for lightweight and scalable graph-enhanced knowledge tracing.
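The paper's exact sampling procedure is not reproduced here, but the core idea of extracting only the local subgraph around a target interaction can be sketched as a bounded k-hop breadth-first expansion over the student-exercise-KC graph. The function name, the adjacency-dict representation, and the `max_nodes` budget below are illustrative assumptions, not the authors' implementation:

```python
from collections import deque

def extract_subgraph(adj, seeds, num_hops=2, max_nodes=50):
    """Sample a k-hop enclosing subgraph around the target interaction.

    adj:   dict mapping each node to its neighbor list in the
           student-exercise-KC tripartite graph (hypothetical format).
    seeds: the nodes of the target interaction, e.g. the student and
           the exercise being predicted.
    Returns the sampled node set and the induced edge list; the rest of
    the global graph is never touched, which is the source of the
    memory/compute savings described above.
    """
    visited = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier and len(visited) < max_nodes:
        node, depth = frontier.popleft()
        if depth == num_hops:
            continue  # stop expanding beyond the hop budget
        for nb in adj.get(node, []):
            if nb not in visited and len(visited) < max_nodes:
                visited.add(nb)
                frontier.append((nb, depth + 1))
    # keep only edges whose endpoints both survived the sampling
    edges = [(u, v) for u in visited
             for v in adj.get(u, []) if v in visited]
    return visited, edges

# Tiny tripartite example: students s*, exercises e*, concepts k*.
adj = {
    "s1": ["e1", "e2"], "e1": ["s1", "k1"], "e2": ["s1", "k1", "k2"],
    "k1": ["e1", "e2"], "k2": ["e2"],
    "s2": ["e3"], "e3": ["s2"],  # unrelated component, should be skipped
}
nodes, edges = extract_subgraph(adj, ["s1", "e1"])
```

On this toy graph the sample contains only the component reachable within two hops of the target pair (`s1`, `e1`, `e2`, `k1`, `k2`); the unrelated student `s2` is never visited, illustrating why per-interaction subgraphs scale independently of global graph size.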
📝 Abstract
The rise of online learning has led to the development of various knowledge tracing (KT) methods. However, existing methods have overlooked the problem of increasing computational cost when utilizing large graphs and long learning sequences. To address this issue, we introduce Dual Graph Attention-based Knowledge Tracing (DGAKT), a graph neural network model designed to leverage high-order information from subgraphs representing student-exercise-KC relationships. DGAKT incorporates a subgraph-based approach to enhance computational efficiency. By processing only relevant subgraphs for each target interaction, DGAKT significantly reduces memory and computational requirements compared to full global graph models. Extensive experimental results demonstrate that DGAKT not only outperforms existing KT models but also sets a new standard in resource efficiency, addressing a critical need that has been largely overlooked by prior KT approaches.
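The abstract's "dual graph attention" is not specified in detail here; a minimal, hypothetical sketch of the underlying building block is plain scaled dot-product attention, applied once over the sampled subgraph's node embeddings and once over the student's interaction sequence. All names and the two-call composition below are assumptions for illustration, not DGAKT's actual architecture:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention (single query, single head).

    query:  list[float], the target node/interaction embedding.
    keys:   list of embeddings to score against (subgraph neighbors
            at the structural level, past interactions at the
            sequential level).
    values: embeddings to aggregate, weighted by the softmax scores.
    Returns (context_vector, attention_weights).
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                       # numerical stability shift
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights

# Level 1 (illustrative): attend over subgraph neighbor embeddings.
target = [1.0, 0.0]
neighbors = [[1.0, 0.0], [0.0, 1.0]]
struct_ctx, struct_w = attention(target, neighbors, neighbors)

# Level 2 (illustrative): attend over the student's past interactions,
# reusing the structurally-enriched context as the query.
history = [[0.5, 0.5], [1.0, 0.0]]
seq_ctx, seq_w = attention(struct_ctx, history, history)
```

The neighbor aligned with the query receives the larger weight at the structural level, and the weights at each level sum to one; a dual-level design of this kind lets one attention pass capture graph structure and the other capture interaction order, per the abstract's description.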