🤖 AI Summary
Existing temporal link prediction (TLP) models rely on memory modules to capture node historical behavior, but their memory embeddings are learned exclusively on training graphs and thus fail to generalize to unseen graphs, limiting cross-graph transferability. To address this, the authors propose a Structural Mapping Module (SMM), which maps topological features, rather than node histories, into memory embeddings, thereby decoupling the memory representation from the source graph's data. The method combines graph neural networks, a learnable structural encoder, and a memory-augmented architecture to construct a topology-driven, transfer-friendly representation space, and shows significant gains on zero-shot cross-graph TLP tasks. This work is a step toward a memory-free foundation model paradigm for dynamic graph learning.
📝 Abstract
Link prediction on graphs has applications spanning from recommender systems to drug discovery. Temporal link prediction (TLP) refers to predicting future links in a temporally evolving graph, which adds complexity arising from the dynamic nature of such graphs. State-of-the-art TLP models incorporate memory modules alongside graph neural networks to learn both the temporal behavior of incoming nodes and the evolving graph topology. However, memory modules only store information about nodes seen at train time, so such models cannot be directly transferred to entirely new graphs at test time or deployment. In this work, we study a new transfer learning task for temporal link prediction and develop transfer-effective methods for memory-laden models. Specifically, motivated by work showing the informativeness of structural signals for the TLP task, we augment existing TLP model architectures with a structural mapping module, which learns a mapping from graph structural (topological) features to memory embeddings. Our work paves the way for a memory-free foundation model for TLP.
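To make the core idea concrete, here is a minimal sketch of a structural mapping module: per-node topological features are computed from the graph and passed through a small MLP to produce memory-style embeddings. All specifics (the feature choice of degree and mean neighbor degree, the two-layer MLP, and all dimensions) are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def structural_features(edges, num_nodes):
    """Compute simple per-node topological features from an undirected edge list.

    Features (a hypothetical choice): degree and mean neighbor degree.
    Returns an array of shape (num_nodes, 2).
    """
    deg = np.zeros(num_nodes)
    nbrs = [[] for _ in range(num_nodes)]
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        nbrs[u].append(v)
        nbrs[v].append(u)
    mean_nbr_deg = np.array([deg[n].mean() if n else 0.0 for n in nbrs])
    return np.stack([deg, mean_nbr_deg], axis=1)

class StructuralMapping:
    """Illustrative two-layer MLP mapping structural features to memory embeddings.

    In a real model these weights would be learned jointly with the TLP
    architecture; here they are randomly initialized for demonstration.
    """
    def __init__(self, in_dim=2, hidden_dim=16, mem_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.normal(0.0, 0.1, (hidden_dim, mem_dim))
        self.b2 = np.zeros(mem_dim)

    def __call__(self, feats):
        h = np.maximum(feats @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        return h @ self.W2 + self.b2  # (num_nodes, mem_dim) memory embeddings

# Usage on a toy 4-node graph: because the embeddings depend only on topology,
# the same mapping can be applied to a graph never seen during training.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
feats = structural_features(edges, num_nodes=4)
memory = StructuralMapping()(feats)
```

The key property this sketch illustrates is that `memory` is a function of graph structure alone, so no per-node state from the training graph is required at inference.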