🤖 AI Summary
Existing continual learning methods for temporal knowledge graph reasoning (TKGR) suffer from catastrophic forgetting caused by the loss of historical semantics and by conflicts between old and new facts. To address this, we propose a generative self-adaptive replay framework. First, we introduce historical context prompts as sampling units that preserve the full temporal context of each fact. Second, we leverage a pre-trained diffusion model to synthesize historical entity distributions, enhancing the features they share with newly learned distributions under the guidance of the TKGR model. Third, we design a hierarchical self-adaptive replay mechanism that integrates historical and current distributions for semantically consistent knowledge retention and updating. This work is the first to integrate diffusion-based generation, context-aware prompt learning, and hierarchical distribution alignment into continual TKGR learning. Experiments demonstrate significant mitigation of forgetting: inference accuracy improves substantially across multiple benchmarks, and forgetting rates decrease by over 40%.
📝 Abstract
Recent Continual Learning (CL)-based Temporal Knowledge Graph Reasoning (TKGR) methods focus on reducing the computational cost and catastrophic forgetting caused by fine-tuning models on new data. However, existing CL-based TKGR methods still face two key limitations: (1) they typically reorganize individual historical facts in isolation, overlooking the historical context essential for accurately understanding the semantics of those facts; and (2) they preserve historical knowledge by simply replaying historical facts, ignoring potential conflicts between historical and emerging facts. In this paper, we propose a Deep Generative Adaptive Replay (DGAR) method that generates and adaptively replays historical entity distribution representations derived from the whole historical context. To address the first limitation, we build historical context prompts as sampling units that preserve the complete historical context. To overcome the second, we adopt a pre-trained diffusion model to generate the historical distribution; during generation, the features shared by the historical and current distributions are enhanced under the guidance of the TKGR model. In addition, we design a layer-by-layer adaptive replay mechanism that effectively integrates the historical and current distributions. Experimental results demonstrate that DGAR significantly outperforms baselines in both reasoning accuracy and mitigation of forgetting.
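The layer-by-layer adaptive replay idea described above can be illustrated with a minimal sketch. The function below is purely hypothetical (the paper does not specify the gating rule): it assumes each layer's mixing weight comes from the compatibility of the generated historical representation and the current one at that layer, replaying more of the historical distribution where the two agree.

```python
import numpy as np

def layerwise_adaptive_replay(historical_states, current_states):
    """Blend generated historical and current per-layer representations.

    Hypothetical illustration of 'layer-by-layer adaptive replay': each
    layer receives its own mixing weight alpha based on how compatible
    the historical and current distributions are at that layer. The
    cosine-similarity gate is an assumption, not the paper's mechanism.
    """
    fused = []
    for h, c in zip(historical_states, current_states):
        # Cosine similarity as a stand-in compatibility score per layer.
        sim = float(np.dot(h, c) / (np.linalg.norm(h) * np.linalg.norm(c) + 1e-8))
        alpha = (sim + 1.0) / 2.0  # map [-1, 1] -> [0, 1]: replay more when compatible
        fused.append(alpha * h + (1.0 - alpha) * c)
    return fused
```

In a real DGAR-style pipeline, `historical_states` would be sampled from the diffusion model conditioned on historical context prompts, and the fused representations would feed the TKGR model's subsequent layers.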