Efficient Knowledge Tracing Leveraging Higher-Order Information in Integrated Graphs

📅 2025-07-24
🤖 AI Summary
To address the high computational overhead in knowledge tracing (KT) caused by large-scale relational graphs and long interaction sequences, this paper proposes DGAKT, a dual graph attention model based on dynamic subgraph sampling. DGAKT avoids costly global graph modeling by extracting only the local subgraph relevant to the target student-exercise interaction. It employs a two-level graph attention mechanism to jointly encode sequential student interactions and higher-order structural relationships among students, exercises, and knowledge concepts (KCs). This design substantially reduces memory consumption and computational complexity while improving predictive accuracy. The authors present DGAKT as the first KT method to achieve high performance and low resource consumption simultaneously, and extensive experiments on multiple benchmark datasets demonstrate its superiority over state-of-the-art approaches. The work points toward a lightweight, scalable paradigm for graph-enhanced knowledge tracing.
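The subgraph-sampling idea described above can be sketched as a k-hop neighborhood extraction around the target student-exercise pair. The adjacency structure, node names, and function below are illustrative assumptions for a tripartite student-exercise-KC graph, not the paper's actual implementation:

```python
from collections import deque

def extract_subgraph(adj, student, exercise, k=2):
    """BFS out to k hops from the target student-exercise pair,
    keeping only the local neighborhood instead of the full graph.
    `adj` maps each node to its neighbor list (hypothetical format)."""
    seeds = {student, exercise}
    visited = set(seeds)
    frontier = deque((n, 0) for n in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nb in adj.get(node, ()):
            if nb not in visited:
                visited.add(nb)
                frontier.append((nb, depth + 1))
    # induced edge set on the visited nodes
    edges = {(u, v) for u in visited for v in adj.get(u, ()) if v in visited}
    return visited, edges

# toy tripartite graph: students (s*), exercises (e*), knowledge concepts (c*)
adj = {
    "s1": ["e1", "e2"], "e1": ["s1", "c1"], "e2": ["s1", "c1", "c2"],
    "c1": ["e1", "e2"], "c2": ["e2", "e3"], "e3": ["c2", "s2"], "s2": ["e3"],
}
nodes, edges = extract_subgraph(adj, "s1", "e1", k=1)
```

Because only the 1-hop neighborhood of the target pair is materialized, distant nodes such as `s2` never enter memory, which is the source of the efficiency gain the summary describes.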

📝 Abstract
The rise of online learning has led to the development of various knowledge tracing (KT) methods. However, existing methods have overlooked the problem of increasing computational cost when utilizing large graphs and long learning sequences. To address this issue, we introduce Dual Graph Attention-based Knowledge Tracing (DGAKT), a graph neural network model designed to leverage high-order information from subgraphs representing student-exercise-KC relationships. DGAKT incorporates a subgraph-based approach to enhance computational efficiency. By processing only relevant subgraphs for each target interaction, DGAKT significantly reduces memory and computational requirements compared to full global graph models. Extensive experimental results demonstrate that DGAKT not only outperforms existing KT models but also sets a new standard in resource efficiency, addressing a critical need that has been largely overlooked by prior KT approaches.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost in large knowledge graphs
Improving efficiency with subgraph-based processing
Enhancing performance in knowledge tracing models
Innovation

Methods, ideas, or system contributions that make the work stand out.

DGAKT uses dual graph attention networks
Leverages high-order subgraph information
Enhances efficiency with subgraph processing
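The dual graph attention named above builds on standard GAT-style aggregation: score each neighbor, softmax-normalize per node, and aggregate the projected features. A minimal single-head layer, assuming dense NumPy inputs rather than the paper's actual architecture:

```python
import numpy as np

def gat_layer(h, adj, W, a, leaky=0.2):
    """Single-head graph attention: score each edge, softmax-normalize
    per node, then aggregate the projected neighbor features."""
    z = h @ W                                  # project features: (N, d_out)
    out = np.zeros_like(z)
    for i in range(len(h)):
        nbrs = [j for j in range(len(h)) if adj[i, j]] + [i]  # self-loop
        # LeakyReLU logits a^T [z_i || z_j] for each neighbor j
        e = np.array([np.concatenate([z[i], z[j]]) @ a for j in nbrs])
        e = np.where(e > 0, e, leaky * e)
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()                   # attention weights sum to 1
        out[i] = alpha @ z[nbrs]
    return out

# toy input: 3 nodes with 2-dim features, node 0 linked to nodes 1 and 2
h = np.ones((3, 2))
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
out = gat_layer(h, adj, W=np.eye(2), a=np.zeros(4))
```

In the paper's dual design one such attention level operates over the student's interaction sequence and another over the student-exercise-KC subgraph structure; this sketch shows only the generic aggregation step they share.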
Donghee Han
KAIST GSDS
Daehee Kim
NAVER Cloud
Minjun Lee
Data Platform Team, SOCAR, Seoul, Republic of Korea
Daeyoung Roh
Graduate School of Data Science, KAIST, Daejeon, Republic of Korea
Keejun Han
Division of Computer Engineering, Hansung University, Seoul, Republic of Korea
Mun Yong Yi
Department of Industrial and Systems Engineering, KAIST, Daejeon, Republic of Korea