🤖 AI Summary
To address the efficiency-expressiveness trade-off in modeling long-range temporal dependencies and capturing fine-grained time patterns in continuous-time dynamic graphs (CTDGs), this paper proposes a dual-level state space model (SSM): a node-level SSM encodes interaction sequences, while a time-level SSM models graph evolution dynamics and adaptively selects salient historical events. The method integrates Mamba-inspired architectural enhancements, continuous-time graph encoding, and a history-aware attention mechanism, achieving linear-time complexity while substantially improving long-range dependency modeling. Evaluated on dynamic link prediction, the approach attains state-of-the-art (SOTA) performance across most benchmarks. Moreover, it significantly reduces GPU memory consumption and inference latency compared to Transformer-based models, effectively balancing computational efficiency with representational capacity.
📝 Abstract
Learning useful representations for continuous-time dynamic graphs (CTDGs) is challenging due to the concurrent need to span long node interaction histories and grasp nuanced temporal details. In particular, two problems emerge: (1) Encoding longer histories requires more computational resources, making it crucial for CTDG models to maintain low computational complexity to ensure efficiency; (2) Meanwhile, more powerful models are needed to identify and select the most critical temporal information within the extended context provided by longer histories. To address these problems, we propose a CTDG representation learning model named DyGMamba, originating from the popular Mamba state space model (SSM). DyGMamba first leverages a node-level SSM to encode the sequence of historical node interactions. Another time-level SSM is then employed to exploit the temporal patterns hidden in the historical graph, and its output is used to dynamically select the critical information from the interaction history. We validate DyGMamba experimentally on the dynamic link prediction task. The results show that our model achieves state-of-the-art performance in most cases. DyGMamba also maintains high efficiency in terms of computational resources, making it possible to capture long temporal dependencies with a limited computation budget.
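The dual-level design described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the function names (`ssm_scan`, `dyg_sketch`), the diagonal-decay scan, the random parameters, and the softmax-based history selection are all simplifying assumptions standing in for DyGMamba's learned Mamba blocks. The point is the structure: a node-level SSM encodes the interaction-feature sequence in linear time, a time-level SSM maps inter-event time gaps to selection scores, and those scores softly pick out the critical part of the history.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal linear-time SSM recurrence (illustrative, not the Mamba kernel).

    x: (T, d_in) input sequence; A: (d_state,) diagonal state decay;
    B: (d_state, d_in) input map; C: (d_out, d_state) readout.
    Returns per-step outputs y of shape (T, d_out).
    """
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(x.shape[0]):
        h = A * h + B @ x[t]   # state update: h_t = A h_{t-1} + B x_t
        ys.append(C @ h)       # readout:      y_t = C h_t
    return np.stack(ys)

def dyg_sketch(node_feats, time_gaps, rng):
    """Hypothetical dual-level encoding in the spirit of DyGMamba:
    (1) a node-level SSM encodes the interaction features;
    (2) a time-level SSM turns inter-event time gaps into weights
        that softly select salient steps of the encoded history."""
    T, d = node_feats.shape
    d_state = 8
    # Random stand-in parameters; in the real model these are learned.
    A = 0.9 * np.ones(d_state)
    B_node = rng.standard_normal((d_state, d))
    C_node = rng.standard_normal((d, d_state))
    B_time = rng.standard_normal((d_state, 1))
    C_time = rng.standard_normal((1, d_state))

    node_states = ssm_scan(node_feats, A, B_node, C_node)        # (T, d)
    sel_logits = ssm_scan(time_gaps[:, None], A, B_time, C_time)  # (T, 1)
    weights = np.exp(sel_logits - sel_logits.max())               # softmax over history
    weights /= weights.sum()
    return (weights * node_states).sum(axis=0)  # selected summary, shape (d,)

rng = np.random.default_rng(0)
emb = dyg_sketch(rng.standard_normal((16, 4)), rng.random(16), rng)
```

Both scans cost O(T) in the history length, which is the efficiency argument the abstract makes against quadratic attention over long histories.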