🤖 AI Summary
Existing static network models fail to capture the causal evolution and historical dependencies driven by temporal events in dynamic networks. To address this, the authors propose SimHawNet, a generative model based on temporal point processes of Hawkes type, which incorporates time-varying, history-based features as covariates in the intensity function and thereby directly models the waiting time between edge activations. Coupled with a thinning algorithm designed for simulating point processes, SimHawNet can simulate the evolution of temporal networks in continuous time. The authors also introduce a comprehensive evaluation framework for this task. Experiments on networks with very different generative processes show fidelity comparable to the state of the art while being significantly faster, improving scalability and practical applicability.
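To make the covariate-modulated intensity concrete, here is a minimal sketch of a Hawkes-style conditional intensity for a single edge. This is an illustration only, not the paper's actual formulation: the exponential excitation kernel, the log-linear covariate factor, and all names (`edge_intensity`, `mu`, `alpha`, `beta`, `theta`, `covariates`) are assumptions.

```python
import math

def edge_intensity(t, events, mu, alpha, beta, theta, covariates):
    """Illustrative conditional intensity of one edge at time t.

    events:      past activation times of this edge
    mu:          baseline rate
    alpha, beta: jump size and decay rate of the exponential excitation kernel
    theta:       weights on the time-varying covariates (hypothetical)
    covariates:  function t -> feature vector x(t)
    """
    # Self-excitation: every past event raises the rate, decaying exponentially.
    excitation = sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti <= t)
    # Time-varying, history-based features enter through a log-linear factor,
    # which keeps the intensity strictly positive.
    modulation = math.exp(sum(w * x for w, x in zip(theta, covariates(t))))
    return (mu + excitation) * modulation
```

With zero covariate weights the modulation factor is 1 and the expression reduces to a plain exponential-kernel Hawkes intensity.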
📝 Abstract
Temporal networks represent connections between objects while incorporating the temporal dimension. While static network models can capture unchanging topological regularities, they often fail to model the effects associated with the causal generative process of the network that occurs in time. Exploiting the temporal aspect of networks has therefore been the focus of many recent studies. In this context, we propose a new framework for generative models of continuous-time temporal networks. We assume that the activation of the edges in a temporal network is driven by a specified temporal point process. This approach allows us to directly model the waiting time between events while incorporating time-varying, history-based features as covariates in the predictions. Coupled with a thinning algorithm designed for the simulation of point processes, SimHawNet enables simulation of the evolution of temporal networks in continuous time. Finally, we introduce a comprehensive evaluation framework to assess the performance of such an approach, in which we demonstrate that SimHawNet successfully simulates the evolution of networks with very different generative processes and achieves performance comparable to the state of the art, while being significantly faster.
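To illustrate the thinning idea the abstract relies on, the following is a minimal sketch of Ogata-style thinning for a plain univariate Hawkes process with an exponential kernel. It is not SimHawNet's actual sampler; all names and parameters are illustrative.

```python
import math
import random

def intensity(t, events, mu, alpha, beta):
    # Right-continuous conditional intensity: baseline plus decaying excitation.
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti <= t)

def simulate_by_thinning(T, mu, alpha, beta, seed=0):
    """Sample event times on [0, T] by rejection (thinning).

    Between events the exponential-kernel intensity is non-increasing, so its
    value at the current time upper-bounds it until the next accepted event.
    """
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)  # dominating rate
        t += rng.expovariate(lam_bar)                    # candidate waiting time
        if t > T:
            return events
        # Accept the candidate with probability lambda(t) / lam_bar.
        if rng.random() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events.append(t)
```

The rejection step is what lets the sampler work directly with the waiting times of an inhomogeneous process: candidates are drawn from a homogeneous process at the dominating rate, and thinned down to the true intensity.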