🤖 AI Summary
Existing dynamic graph models struggle to effectively capture the asymmetric behaviors and temporal evolution between source and target nodes in directed graphs. This work proposes a role-aware Transformer architecture that explicitly decouples node representations by employing separate embedding tables and role-semantic positional encodings for each node type. Furthermore, we introduce Temporal Contrastive Link Prediction (TCLP), a self-supervised pretraining strategy that leverages the full history of unlabeled interactions to learn role-specific representations. To our knowledge, this is the first systematic incorporation of node role awareness into dynamic graph modeling. The proposed approach significantly enhances representation learning under low-label regimes and substantially outperforms state-of-the-art baselines on future link classification tasks, thereby validating the efficacy of explicit role modeling.
📝 Abstract
Real-world dynamic graphs are often directed, with source and destination nodes exhibiting asymmetric behavioral patterns and temporal dynamics. However, existing dynamic graph architectures largely rely on shared parameters for processing source and destination nodes, with little or no systematic role-aware modeling. We propose DyGnROLE (Dynamic Graph Node-Role-Oriented Latent Encoding), a transformer-based architecture that explicitly disentangles source and destination representations. By using separate embedding vocabularies and role-semantic positional encodings, the model captures the distinct structural and temporal contexts unique to each role. To make these specialized embeddings effective in low-label regimes, we introduce a self-supervised pretraining objective, Temporal Contrastive Link Prediction (TCLP), which exploits the full unlabeled interaction history to encode informative structural biases, enabling the model to learn role-specific representations without annotated data. On future edge classification, DyGnROLE substantially outperforms a diverse set of state-of-the-art baselines, establishing role-aware modeling as an effective strategy for dynamic graph learning.
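The two core ideas, role-decoupled embeddings and a contrastive link-prediction pretraining objective, can be sketched minimally as follows. This is an illustrative toy in NumPy, not the paper's implementation: all names, dimensions, the role-encoding scheme, and the in-batch negative sampling are assumptions, since the abstract does not specify these details.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 100, 16  # hypothetical vocabulary size and embedding width

# Separate embedding tables per role: the same node id maps to different
# vectors depending on whether it acts as a source or a destination.
src_table = rng.normal(scale=0.1, size=(num_nodes, dim))
dst_table = rng.normal(scale=0.1, size=(num_nodes, dim))

# Role-semantic positional encodings: one vector per role, added on top
# of the node embedding to mark which role a token plays in the sequence.
role_enc = {"src": rng.normal(scale=0.1, size=dim),
            "dst": rng.normal(scale=0.1, size=dim)}

def embed(node_ids, role):
    """Look up the role-specific embedding and add the role encoding."""
    table = src_table if role == "src" else dst_table
    return table[node_ids] + role_enc[role]

def tclp_loss(src_ids, dst_ids, temperature=0.1):
    """Toy InfoNCE-style contrastive link-prediction loss: each observed
    (src, dst) interaction is a positive pair; the other destinations in
    the batch serve as in-batch negatives."""
    s = embed(src_ids, "src")                    # (B, dim)
    d = embed(dst_ids, "dst")                    # (B, dim)
    logits = s @ d.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

# One pretraining step would minimize this loss over a batch of
# historical interactions drawn from the unlabeled event stream.
batch_src = rng.integers(0, num_nodes, size=32)
batch_dst = rng.integers(0, num_nodes, size=32)
loss = tclp_loss(batch_src, batch_dst)
```

For untrained embeddings the loss sits near ln(batch size), and pretraining pulls interacting source/destination pairs together in the shared similarity space. In a full model these lookups would feed a Transformer encoder rather than being scored directly.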