🤖 AI Summary
Existing continual knowledge graph embedding (CKGE) methods face two key bottlenecks: (1) manually designed importance scoring ignores downstream task dependencies, leading to insufficient retention of historical knowledge; and (2) graph-traversal-based importance computation incurs high computational overhead, hindering efficiency and scalability. To address these, we propose the first learnable, task-driven token mechanism, which replaces graph traversal and explicit node scoring with lightweight matrix operations and provides consistent, reusable guidance for knowledge alignment and transfer across snapshots. The framework combines learnable task tokens, token-masked embedding alignment, parameter sharing, and lightweight transformations. Evaluated on six benchmark datasets, it achieves state-of-the-art or comparable performance while training 3.2× faster and using 57% less memory, substantially improving both efficiency and scalability.
📝 Abstract
Continual Knowledge Graph Embedding (CKGE) seeks to integrate new knowledge while preserving past information. However, existing methods struggle with efficiency and scalability due to two key limitations: (1) suboptimal knowledge preservation between snapshots, because manually designed node/relation importance scores ignore graph dependencies relevant to the downstream task, and (2) computationally expensive graph traversal for computing node/relation importance, leading to slow training and high memory overhead. To address these limitations, we introduce ETT-CKGE (Efficient Task-driven Tokens for Continual Knowledge Graph Embedding), a novel task-guided CKGE method that leverages task-driven tokens for efficient and effective knowledge transfer between snapshots. Our method introduces a set of learnable tokens that directly capture task-relevant signals, eliminating the need for explicit node scoring or graph traversal. These tokens serve as consistent, reusable guidance across snapshots, enabling efficient token-masked embedding alignment. Importantly, knowledge transfer is achieved through simple matrix operations, significantly reducing training time and memory usage. Extensive experiments on six benchmark datasets demonstrate that ETT-CKGE consistently achieves superior or competitive predictive performance while substantially improving training efficiency and scalability compared to state-of-the-art CKGE methods. The code is available at: https://github.com/lijingzhu1/ETT-CKGE/tree/main
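To make the idea concrete, below is a minimal PyTorch sketch of what token-masked embedding alignment via simple matrix operations could look like. This is an illustration of the general technique, not the paper's actual implementation (see the repository above for that): the class name `TaskTokenAligner`, the sigmoid-of-max mask, and all shapes and hyperparameters are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class TaskTokenAligner(nn.Module):
    """Hypothetical sketch of token-masked embedding alignment.

    A small set of learnable task tokens scores each entity's
    task relevance via one matrix product; the resulting soft mask
    weights an alignment (anti-drift) loss between the frozen
    embeddings of the previous snapshot and the trainable
    embeddings of the current one. No graph traversal is needed.
    """

    def __init__(self, num_tokens: int, dim: int):
        super().__init__()
        # Learnable task tokens, shared and reused across snapshots.
        self.tokens = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)

    def forward(self, old_emb: torch.Tensor, new_emb: torch.Tensor) -> torch.Tensor:
        # Token-entity relevance scores: (n_entities, num_tokens).
        scores = new_emb @ self.tokens.t()
        # Soft mask per entity: max relevance over tokens, squashed to (0, 1).
        mask = torch.sigmoid(scores.max(dim=-1).values)  # (n_entities,)
        # Masked alignment loss: penalize drift from the old snapshot
        # only where the tokens deem the knowledge task-relevant.
        drift = (new_emb - old_emb.detach()).pow(2).sum(dim=-1)
        return (mask * drift).mean()

# Usage on two snapshots' entity embeddings (illustrative sizes):
aligner = TaskTokenAligner(num_tokens=5, dim=128)
old_emb = torch.randn(1000, 128)                        # frozen, snapshot i
new_emb = torch.randn(1000, 128, requires_grad=True)    # trainable, snapshot i+1
loss = aligner(old_emb, new_emb)  # plain matrix ops only
loss.backward()                   # updates new_emb and the task tokens
```

Because both the relevance scores and the masked loss are dense matrix operations, the per-step cost is O(n·k·d) with a handful of tensor ops, which is the kind of profile that avoids the traversal and per-node scoring overhead the abstract identifies in prior CKGE methods.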