🤖 AI Summary
Temporal Graph Neural Networks (Temporal GNNs) face three key bottlenecks in Laplacian positional encoding: high computational cost, weak theoretical foundations, and ambiguous application guidelines. Method: This paper proposes a unified theoretical framework linking per-time-slice encodings to supra-Laplacian encodings; designs an efficient algorithm based on spectral approximation and sparse eigendecomposition, accelerating encoding computation by up to 56× and scaling to graphs with 50,000 active nodes; and conducts an extensive cross-architecture study of which models, tasks, and datasets benefit from these encodings. Contribution/Results: A key finding is that encoding effectiveness is highly contingent on both model architecture and task type. Experiments on recommendation and traffic forecasting demonstrate significant performance gains in certain scenarios. Moreover, the paper explicitly characterizes the marginal utility and applicability conditions of positional encoding across diverse GNN architectures, advancing temporal graph positional encoding from heuristic design toward an interpretable, scalable, and adaptable paradigm.
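As a rough sketch of the construction behind the method (not the paper's actual implementation), the supra-Laplacian of T temporal snapshots can be assembled as a block matrix: each slice's adjacency sits on the diagonal, and identity couplings connect a node's copies in adjacent slices. The coupling weight `omega` is an illustrative parameter introduced here, not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp

def supra_laplacian(snapshots, omega=1.0):
    """Combinatorial supra-Laplacian of a list of temporal snapshots.

    snapshots: list of T sparse (n x n) adjacency matrices, one per time slice.
    omega: inter-layer coupling weight between a node's copies in
           consecutive slices (hypothetical parameter for illustration).
    Returns the (T*n x T*n) sparse Laplacian L = D - A_supra.
    """
    T = len(snapshots)
    n = snapshots[0].shape[0]
    # Intra-slice edges: block-diagonal stack of the snapshot adjacencies.
    intra = sp.block_diag(snapshots, format="csr")
    # Inter-slice edges: couple node i at time t to node i at time t+1.
    path = sp.diags([np.ones(T - 1), np.ones(T - 1)], [-1, 1])
    inter = omega * sp.kron(path, sp.eye(n), format="csr")
    A = intra + inter
    deg = np.asarray(A.sum(axis=1)).ravel()
    return sp.diags(deg) - A
```

Because every block is sparse, the result stays sparse even for large T*n, which is what makes the sparse eigendecomposition step feasible at the 50,000-node scale the paper reports.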
📝 Abstract
Temporal graph learning has applications in recommendation systems, traffic forecasting, and social network analysis. Although multiple architectures have been introduced, progress in positional encoding for temporal graphs remains limited. Extending static Laplacian eigenvector approaches to temporal graphs through the supra-Laplacian has shown promise, but also poses key challenges: high eigendecomposition costs, limited theoretical understanding, and ambiguity about when and how to apply these encodings. In this paper, we address these issues by (1) offering a theoretical framework that connects supra-Laplacian encodings to per-time-slice encodings, highlighting the benefits of leveraging additional temporal connectivity, (2) introducing novel methods to reduce the computational overhead, achieving up to 56x faster runtimes while scaling to graphs with 50,000 active nodes, and (3) conducting an extensive experimental study to identify which models, tasks, and datasets benefit most from these encodings. Our findings reveal that while positional encodings can significantly boost performance in certain scenarios, their effectiveness varies across different models.
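To make the eigendecomposition step concrete, a minimal sketch of Laplacian positional encodings using SciPy's sparse Lanczos solver is shown below. It extracts only the k smallest nontrivial eigenvectors without densifying the matrix; this is a generic baseline, not the paper's spectral-approximation scheme, and the function name is our own.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_pe(L, k=8):
    """k-dimensional positional encodings from the k smallest nontrivial
    eigenvectors of a sparse Laplacian L.

    Uses Lanczos iteration (eigsh), which only needs matrix-vector
    products, so L never has to be densified.
    """
    # Request k+1 eigenpairs so the trivial constant eigenvector
    # (eigenvalue 0) can be discarded.
    vals, vecs = eigsh(L, k=k + 1, which="SA")
    order = np.argsort(vals)
    return vecs[:, order[1:]]
```

A partial solver like this is the standard way to avoid the cubic cost of a full eigendecomposition; the paper's reported speedups come from combining such sparse solves with further spectral approximation.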