A Generative Adaptive Replay Continual Learning Model for Temporal Knowledge Graph Reasoning

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing continual learning methods for temporal knowledge graph reasoning (TKGR) suffer from catastrophic forgetting due to historical semantic loss and conflicts between old and new facts. To address this, we propose a generative self-adaptive replay framework. First, we introduce a novel historical context prompting sampling unit that models the full temporal context. Second, we leverage a pre-trained diffusion model to synthesize historical entity distributions and align their shared characteristics with newly learned distributions under TKGR model guidance. Third, we design a hierarchical self-adaptive replay mechanism to ensure semantically consistent knowledge retention and updating. This work is the first to integrate diffusion-based generation, context-aware prompt learning, and hierarchical distribution alignment into TKGR continual learning. Experiments demonstrate significant mitigation of forgetting: inference accuracy improves substantially across multiple benchmarks, and forgetting rates decrease by over 40%.

📝 Abstract
Recent Continual Learning (CL)-based Temporal Knowledge Graph Reasoning (TKGR) methods aim to reduce computational cost and mitigate the catastrophic forgetting caused by fine-tuning models on new data. However, existing CL-based TKGR methods still face two key limitations: (1) they typically reorganize individual historical facts in isolation, overlooking the historical context essential for accurately understanding the semantics of those facts; (2) they preserve historical knowledge by simply replaying historical facts, ignoring potential conflicts between historical and emerging facts. In this paper, we propose a Deep Generative Adaptive Replay (DGAR) method, which generates and adaptively replays historical entity distribution representations from the whole historical context. To address the first challenge, historical context prompts are built as sampling units to preserve the whole historical context. To overcome the second challenge, a pre-trained diffusion model is adopted to generate the historical distribution; during generation, the common features between the historical and current distributions are enhanced under the guidance of the TKGR model. In addition, a layer-by-layer adaptive replay mechanism is designed to effectively integrate historical and current distributions. Experimental results demonstrate that DGAR significantly outperforms baselines in both reasoning accuracy and forgetting mitigation.
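The guided-generation idea in the abstract can be illustrated with a toy sketch. The paper's actual method uses a pre-trained diffusion model steered by the TKGR model; the code below is only a minimal stand-in under stated assumptions: the "denoiser" is a closed-form pull toward the historical distribution mean, and TKGR-model guidance is replaced by an analytic gradient toward the current distribution's mean. All names (`guided_generation`, `hist_mean`, `curr_mean`, the step sizes) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_generation(hist_mean, curr_mean, steps=50, guidance=0.3, n=64):
    """Toy diffusion-style sampler: start from noise and iteratively
    denoise toward the historical entity distribution, while a guidance
    term nudges samples toward features shared with the current
    distribution (a stand-in for TKGR-model guidance)."""
    dim = hist_mean.shape[0]
    x = rng.standard_normal((n, dim))  # pure noise at t = T
    for _ in range(steps):
        # denoising step: move toward the historical distribution
        x = x + 0.1 * (hist_mean - x)
        # guidance step: gradient of -||x - curr_mean||^2 scaled by
        # `guidance` pulls samples toward the current distribution
        x = x + guidance * 0.1 * (curr_mean - x)
    return x

hist_mean = np.full(8, 1.0)
curr_mean = np.full(8, -1.0)
samples = guided_generation(hist_mean, curr_mean)
```

Because both pulls are linear, the samples converge to the weighted mean `(hist_mean + guidance * curr_mean) / (1 + guidance)`, i.e. a compromise between historical and current distributions; in the real method this trade-off is what lets replayed representations stay consistent with newly learned facts.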
Problem

Research questions and friction points this paper is trying to address.

Overcoming historical context neglect in TKGR methods
Resolving conflicts between historical and emerging facts
Enhancing continual learning for temporal knowledge graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates historical entity distribution representations
Uses historical context prompts as sampling units
Employs layer-by-layer adaptive replay mechanism
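The layer-by-layer adaptive replay can be sketched as a per-layer gated blend of the generated historical representations and the current ones. This is an illustrative simplification: in the paper the integration is learned and adaptive, whereas here `gates` are fixed scalars and `adaptive_replay` is a hypothetical name, not the authors' implementation.

```python
import numpy as np

def adaptive_replay(hist_layers, curr_layers, gates):
    """Blend generated historical and current representations layer by
    layer. Each layer uses its own gate g in [0, 1]: the replayed
    representation is g * historical + (1 - g) * current."""
    fused = []
    for h, c, g in zip(hist_layers, curr_layers, gates):
        fused.append(g * h + (1.0 - g) * c)  # convex combination per layer
    return fused

# Mock per-layer hidden representations (batch of 4, dim 8, 3 layers).
hist = [np.ones((4, 8)) * i for i in range(3)]   # generated historical reps
curr = [np.zeros((4, 8)) for _ in range(3)]      # current reps
out = adaptive_replay(hist, curr, gates=[0.8, 0.5, 0.2])
```

Per-layer gating (rather than a single global mixing weight) matters because lower and higher layers of the TKGR model encode different granularities of temporal semantics, so each layer can retain a different amount of historical signal.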
Zhiyu Zhang
Postdoc, Carnegie Mellon University
Machine Learning · Optimization · Statistics
Wei Chen
Guilin University of Electronic Technology, School of Computer Science and Information Security, Guangxi, China
Youfang Lin
School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China; Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing, China
Huaiyu Wan
School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China; Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing, China