๐ค AI Summary
This work addresses the limitations of sinusoidal time encoding in dynamic graph learningโnamely, temporal information loss, high-dimensional redundancy, and low modeling efficiency. We propose Lightweight Linear Time Encoding (LTC), enabling self-attention mechanisms to directly model temporal intervals and evolutionary patterns. LTC is the first systematically validated linear time encoding for temporal modeling on dynamic graphs, eliminating information distortion caused by nonlinear mappings while substantially reducing encoding dimensionality. It integrates seamlessly with mainstream architectures such as TGAT and DyGFormer. On six benchmark datasets, LTC yields consistent average accuracy improvements. Notably, replacing 100-dimensional sinusoidal encoding with 2-dimensional LTC in TGAT reduces parameter count by 43% and outperforms the baseline on five tasks. Our core contribution is establishing linear time encoding as an efficient, low-dimensional, and interpretable paradigm for temporal modeling in dynamic graphs.
๐ Abstract
Dynamic graph learning is essential for applications involving temporal networks and requires effective modeling of temporal relationships. Seminal attention-based models like TGAT and DyGFormer rely on sinusoidal time encoders to capture temporal relationships between edge events. In this paper, we study a simpler alternative: the linear time encoder, which avoids temporal information loss caused by sinusoidal functions and reduces the need for high dimensional time encoders. We show that the self-attention mechanism can effectively learn to compute time spans from linear time encodings and extract relevant temporal patterns. Through extensive experiments on six dynamic graph datasets, we demonstrate that the linear time encoder improves the performance of TGAT and DyGFormer in most cases. Moreover, the linear time encoder can lead to significant savings in model parameters with minimal performance loss. For example, compared to a 100-dimensional sinusoidal time encoder, TGAT with a 2-dimensional linear time encoder saves 43% of parameters and achieves higher average precision on five datasets. These results can be readily used to positively impact the design choices of a wide variety of dynamic graph learning architectures. The experimental code is available at: https://github.com/hsinghuan/dg-linear-time.git.