🤖 AI Summary
Dynamic temporal graphs pose challenges in representing newly emerging nodes and are highly susceptible to structural noise. To address these issues, this paper proposes GTGIB, an inductive dynamic graph representation learning framework. GTGIB integrates graph structure learning with the temporal information bottleneck principle through a two-stage structural enhancement mechanism: it first jointly optimizes dynamic neighborhoods and edge weights to suppress noise, then derives a tractable objective function via variational approximation that regularizes both edge connectivity and node features. Crucially, GTGIB supports fully inductive inference: unseen nodes can be represented without retraining. Extensive link prediction experiments on four real-world dynamic graph datasets demonstrate that GTGIB consistently outperforms state-of-the-art methods under both inductive and transductive settings, validating its robustness and strong generalization capability.
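For readers unfamiliar with the information bottleneck principle the summary refers to, the generic graph-IB objective that such methods extend to the temporal setting can be sketched as follows (the notation here is illustrative, not the paper's exact TGIB formulation, which is derived as a tractable variational bound):

```latex
\min_{p(Z \mid \mathcal{G})} \; -\, I(Z; Y) \;+\; \beta \, I(Z; \mathcal{G})
```

Here $Z$ denotes the learned node representations, $Y$ the prediction target (e.g., future links), $\mathcal{G}$ the observed temporal graph, and $\beta > 0$ trades off predictive sufficiency (the first term) against compression of noisy or redundant structure (the second term).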
📝 Abstract
Temporal graph learning is crucial for dynamic networks in which nodes and edges evolve over time and new nodes continuously join the system. Inductive representation learning in such settings faces two major challenges: effectively representing unseen nodes and mitigating noisy or redundant graph information. We propose GTGIB, a versatile framework that integrates Graph Structure Learning (GSL) with a Temporal Graph Information Bottleneck (TGIB). We design a novel two-step GSL-based structural enhancer that enriches and optimizes node neighborhoods, and we demonstrate its effectiveness and efficiency through theoretical proofs and experiments. The TGIB component refines the optimized graph by extending the information bottleneck principle to temporal graphs, regularizing both edges and features with a tractable TGIB objective derived via variational approximation, which enables stable and efficient optimization. GTGIB-based models are evaluated on link prediction over four real-world datasets; they outperform existing methods on all datasets in the inductive setting, with significant and consistent improvements in the transductive setting.
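The two-step structural enhancement idea can be illustrated with a minimal sketch: first select candidate neighbors for a node by a structure-learning score, then assign weights to the retained edges. The code below is a toy illustration under simple assumptions (cosine-similarity scoring and softmax weighting), not the paper's actual algorithm; the function `enhance_neighborhood` and all its parameters are hypothetical.

```python
import numpy as np

def enhance_neighborhood(x, node, candidates, k=3, temp=1.0):
    """Toy two-step structural enhancement (illustrative only):
    (1) select the top-k candidate neighbors by feature similarity,
    (2) assign softmax edge weights over the selected neighbors."""
    # Step 1: score candidates by cosine similarity to the target node.
    target = x[node]
    cand_feats = x[candidates]
    scores = cand_feats @ target / (
        np.linalg.norm(cand_feats, axis=1) * np.linalg.norm(target) + 1e-8
    )
    top = np.argsort(scores)[::-1][:k]  # indices of the k best candidates
    selected = [candidates[i] for i in top]
    # Step 2: edge weights via a temperature-scaled softmax over top-k scores.
    logits = scores[top] / temp
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return selected, weights
```

In an actual GSL module the similarity score would typically be learned end to end (and, for temporal graphs, conditioned on interaction timestamps) rather than fixed to cosine similarity as in this sketch.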