AI Summary
Long-term forecasting of irregularly spaced temporal event sequences remains challenging for existing autoregressive models, which suffer from error accumulation and myopic prediction horizons. To address this, we propose EventFlow, the first non-autoregressive generative model for continuous-time event modeling. It introduces flow matching to learn the joint distribution over event times directly, enabling likelihood-free, end-to-end joint modeling. EventFlow parameterizes the velocity field with a neural ordinary differential equation (neural ODE), combining event-time embeddings and temporal encodings to support efficient sampling, stable training, and both unconditional and conditional generation. On multiple standard benchmarks, EventFlow matches or surpasses the predictive accuracy of state-of-the-art autoregressive methods (e.g., THP, DyRep), while sampling 3–5× faster and training more robustly.
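To make the sampling claim concrete, here is a minimal, hedged sketch of how event times could be drawn from a flow-matching model: a learned velocity field is integrated from noise at flow time 0 to event times at flow time 1 with forward Euler steps. The function `toy_velocity` is a hypothetical stand-in for a trained network; EventFlow's actual architecture, solver, and conditioning are not shown here.

```python
import numpy as np

def toy_velocity(x, t):
    # Hypothetical placeholder for a trained neural velocity field v(x, t).
    # Here it simply pushes samples toward a fixed illustrative target;
    # this is NOT EventFlow's model, just a shape-compatible example.
    target = np.array([1.0, 2.5, 4.0])
    return target - x

def sample_event_times(velocity_fn, x0, n_steps=100):
    """Integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (event times)."""
    x, dt = x0.copy(), 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        x = x + dt * velocity_fn(x, t)  # forward Euler step of the ODE
    return np.sort(x)                   # report event times in increasing order

rng = np.random.default_rng(0)
times = sample_event_times(toy_velocity, rng.normal(size=3))
```

Because the whole sequence of event times is produced in one ODE integration rather than one event at a time, sampling cost scales with the number of solver steps instead of the sequence length, which is where the speedup over autoregressive decoding comes from.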
Abstract
Continuous-time event sequences, in which events occur at irregular intervals, are ubiquitous across a wide range of industrial and scientific domains. The contemporary modeling paradigm is to treat such data as realizations of a temporal point process, and in machine learning it is common to model temporal point processes in an autoregressive fashion using a neural network. While autoregressive models are successful in predicting the time of a single subsequent event, their performance can be unsatisfactory over longer forecasting horizons due to cascading errors. We propose EventFlow, a non-autoregressive generative model for temporal point processes. Our model builds on the flow matching framework to directly learn joint distributions over event times, sidestepping the autoregressive process. EventFlow is likelihood-free, easy to implement and sample from, and either matches or surpasses the performance of state-of-the-art models in both unconditional and conditional generation tasks on a set of standard benchmarks.