MiNT: Multi-Network Training for Transfer Learning on Temporal Graphs

📅 2024-06-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak generalization of single-network pre-training and the difficulty of zero-shot transfer to unseen temporal graphs, this paper proposes MiNT, a multi-network collaborative pre-training framework. Methodologically, MiNT introduces the first joint pre-training paradigm for multiple temporal graphs, integrating parameter sharing, dynamic time alignment, and cross-graph contrastive learning; it also establishes the first large-scale benchmark of 84 temporal transaction networks and designs a rigorous zero-shot transfer evaluation protocol. Key contributions: (1) revealing a strong positive correlation between the number of pre-training networks and downstream transfer performance; (2) achieving state-of-the-art zero-shot inference accuracy on 20 unseen networks, significantly outperforming models trained individually on each network; and (3) demonstrating that scaling pre-training to 64 networks consistently improves performance, validating the efficacy of large-scale multi-network pre-training.
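The core idea, one model whose parameters are shared and updated across many networks, then evaluated zero-shot on a network it never saw, can be illustrated in miniature. The sketch below is a hypothetical stand-in, not the paper's implementation: logistic regression replaces the temporal GNN backbone, synthetic feature arrays replace the transaction networks, and all names and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical stand-in for the paper's setup: several "networks" that share
# underlying interaction structure (here, one common weight vector), plus one
# held-out unseen network for the zero-shot transfer test.
w_shared = rng.normal(size=DIM)

def make_network(n_events=256):
    X = rng.normal(size=(n_events, DIM))   # per-event edge features
    y = (X @ w_shared > 0).astype(float)   # e.g. "does this edge recur later?"
    return X, y

train_networks = [make_network() for _ in range(4)]
unseen_network = make_network()

# One parameter vector shared across all training networks -- the
# multi-network pre-training idea in miniature.
w = np.zeros(DIM)
lr = 0.5
for epoch in range(50):
    for X, y in train_networks:            # interleave updates across networks
        p = 1.0 / (1.0 + np.exp(-(X @ w))) # logistic prediction
        w -= lr * (X.T @ (p - y)) / len(y) # gradient step on log-loss

# Zero-shot inference: no parameter updates on the unseen network.
Xu, yu = unseen_network
acc = float(np.mean((Xu @ w > 0) == (yu > 0.5)))
print(f"zero-shot accuracy on unseen network: {acc:.2f}")
```

Because the networks share structure, the jointly trained parameters transfer to the unseen network without fine-tuning; adding more (diverse) training networks is what the paper reports as the main driver of transfer performance.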

📝 Abstract
Temporal Graph Learning (TGL) has become a robust framework for discovering patterns in dynamic networks and predicting future interactions. While existing research has largely concentrated on learning from individual networks, this study explores the potential of learning from multiple temporal networks and its ability to transfer to unobserved networks. To achieve this, we introduce Temporal Multi-network Training (MiNT), a novel pre-training approach that learns from multiple temporal networks. With a novel collection of 84 temporal transaction networks, we pre-train TGL models on up to 64 networks and assess their transferability to 20 unseen networks. Remarkably, MiNT achieves state-of-the-art results in zero-shot inference, surpassing models individually trained on each network. Our findings further demonstrate that increasing the number of pre-training networks significantly improves transfer performance. This work lays the groundwork for developing Temporal Graph Foundation Models, highlighting the significant potential of multi-network pre-training in TGL.
Problem

Research questions and friction points this paper is trying to address.

Transfer learning on multiple temporal networks
Pre-training models for unseen temporal graphs
Improving zero-shot inference with multi-network training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-network pre-training for temporal graphs
Zero-shot transfer learning on unseen networks
Groundwork toward Temporal Graph Foundation Models