🤖 AI Summary
For dynamic link prediction on large-scale temporal graphs, this paper departs from conventional node-embedding paradigms and pioneers the extension of Neural Common Neighbor (NCN) modeling to temporal settings, directly capturing the evolving common-neighbor relationships between source–target node pairs. Methodologically, it introduces: (1) a dynamic temporal adjacency dictionary that explicitly encodes pairwise structural evolution over time; (2) a multi-hop common-neighbor sampling and aggregation strategy; and (3) a lightweight neural interaction module. Evaluated on five real-world benchmarks from the Temporal Graph Benchmark (TGB), the model achieves new state-of-the-art (SOTA) performance on three datasets and attains up to 6.4× faster inference than mainstream GNN-based baselines, substantially improving the accuracy-efficiency trade-off.
📝 Abstract
Temporal graphs are ubiquitous in real-world scenarios, such as social networks, trade, and transportation. Predicting dynamic links between nodes in a temporal graph is of vital importance. Traditional methods usually leverage the temporal neighborhood of interaction history to generate node embeddings first and then aggregate the source and target node embeddings to predict the link. However, such methods focus on learning individual node representations, overlook the pairwise nature of link prediction, and fail to capture important pairwise features of links such as common neighbors (CN). Motivated by the success of Neural Common Neighbor (NCN) for static graph link prediction, we propose TNCN, a temporal version of NCN for link prediction in temporal graphs. TNCN dynamically updates a temporal neighbor dictionary for each node, and utilizes multi-hop common neighbors between the source and target node to learn a more effective pairwise representation. We validate our model on five large-scale real-world datasets from the Temporal Graph Benchmark (TGB), and find that it achieves new state-of-the-art performance on three of them. Additionally, TNCN demonstrates excellent scalability on large datasets, outperforming popular GNN baselines by up to 6.4 times in speed. Our code is available at https://github.com/GraphPKU/TNCN.
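The core data structure described above, a per-node temporal neighbor dictionary that is updated as interactions stream in and then queried for common neighbors of a candidate node pair, can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the class name `TemporalNeighborDict` and its methods are hypothetical, and the real TNCN additionally samples multi-hop common neighbors and feeds them through a neural aggregation module.

```python
from collections import defaultdict

class TemporalNeighborDict:
    """Hypothetical sketch of a temporal neighbor dictionary:
    for each node, map neighbors to the time of their last interaction."""

    def __init__(self):
        # node -> {neighbor: last interaction timestamp}
        self.nbrs = defaultdict(dict)

    def update(self, u, v, t):
        # Record an observed interaction (u, v) at time t in both directions.
        self.nbrs[u][v] = t
        self.nbrs[v][u] = t

    def common_neighbors(self, u, v, t):
        # 1-hop common neighbors of u and v using only interactions
        # strictly before the query time t (no leakage from the future).
        cu = {w for w, tw in self.nbrs[u].items() if tw < t}
        cv = {w for w, tw in self.nbrs[v].items() if tw < t}
        return cu & cv

# Example: after (0,1) at t=1 and (2,1) at t=2, node 1 is a
# common neighbor of the pair (0, 2) when queried at t=3.
d = TemporalNeighborDict()
d.update(0, 1, 1.0)
d.update(2, 1, 2.0)
print(d.common_neighbors(0, 2, 3.0))  # → {1}
```

In a full model, the set returned by `common_neighbors` would be embedded and aggregated into a pairwise representation for the link predictor, rather than used directly.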