Knowledge Distillation for Temporal Knowledge Graph Reasoning with Large Language Models

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the high computational cost of existing methods for temporal knowledge graph reasoning, which hinders deployment in resource-constrained environments, and the inability of conventional knowledge distillation to preserve temporal dependencies. To overcome these limitations, the paper proposes a novel distillation framework that, for the first time, leverages a large language model as the teacher to transfer both structural and temporal-aware reasoning capabilities to a lightweight student model. By integrating general semantic knowledge with dynamic temporal information, the approach substantially enhances the student's capacity for temporal modeling. Experimental results across multiple benchmark datasets demonstrate that the proposed framework outperforms strong baselines, achieving a favorable balance among inference accuracy, computational efficiency, and practical deployability.

๐Ÿ“ Abstract
Reasoning over temporal knowledge graphs (TKGs) is fundamental to improving the efficiency and reliability of intelligent decision-making systems and has become a key technological foundation for future artificial intelligence applications. Despite recent progress, existing TKG reasoning models typically rely on large parameter sizes and intensive computation, leading to high hardware costs and energy consumption. These constraints hinder their deployment on resource-constrained, low-power, and distributed platforms that require real-time inference. Moreover, most existing model compression and distillation techniques are designed for static knowledge graphs and fail to adequately capture the temporal dependencies inherent in TKGs, often resulting in degraded reasoning performance. To address these challenges, we propose a distillation framework specifically tailored for temporal knowledge graph reasoning. Our approach leverages large language models as teacher models to guide the distillation process, enabling effective transfer of both structural and temporal reasoning capabilities to lightweight student models. By integrating large-scale public knowledge with task-specific temporal information, the proposed framework enhances the student model's ability to model temporal dynamics while maintaining a compact and efficient architecture. Extensive experiments on multiple publicly available benchmark datasets demonstrate that our method consistently outperforms strong baselines, achieving a favorable trade-off between reasoning accuracy, computational efficiency, and practical deployability.
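The abstract does not spell out the distillation objective, but the standard response-based formulation it builds on trains the student to match the teacher's temperature-softened output distribution via KL divergence. The sketch below is a generic illustration of that classic loss in pure Python, not the paper's actual method; the temperature value, the 4-entity logit vectors, and the function names are all illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # the student's soft predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return T * T * kl

# Hypothetical example: ranking logits over 4 candidate entities
# for one temporal query (subject, relation, ?, timestamp).
teacher_logits = [4.0, 1.0, 0.5, 0.2]
student_logits = [2.0, 1.5, 1.0, 0.8]
loss = distillation_loss(student_logits, teacher_logits)
```

In a temporal setting, frameworks like the one described typically combine such a soft-target term with a hard-label task loss and temporally aware supervision per timestamp, though the exact weighting here is unspecified.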
Problem

Research questions and friction points this paper is trying to address.

Temporal Knowledge Graph
Knowledge Distillation
Model Compression
Reasoning Efficiency
Temporal Dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal Knowledge Graph
Knowledge Distillation
Large Language Models
Model Compression
Temporal Reasoning
Wang Xing
School of Computer Science and Technology, Xidian University
Wei Song
School of Computing and Artificial Intelligence, Southwest Jiaotong University
Siyu Lin
Beijing Jiaotong University
Wireless communications
Chen Wu
School of Computing and Artificial Intelligence, Southwest Jiaotong University
Zhesi Li
School of Information Engineering, Chang'an University
Man Wang
School of Computing and Artificial Intelligence, Southwest Jiaotong University