🤖 AI Summary
This paper addresses the trade-off between accuracy and computational complexity when evaluating model fit and testing parameter significance for multidimensional marked Hawkes processes. We propose the first robust hypothesis testing framework designed specifically for exponential-kernel marked Hawkes processes. Methodologically, our approach integrates maximum likelihood estimation with asymptotic statistical inference, numerical simulation, and Monte Carlo testing to establish a dual-layer testing framework that simultaneously assesses parametric and structural hypotheses. Our key contributions are: (i) the first systematic implementation of joint significance testing for kernel parameters and triggering structure in finite-sample settings, empirically demonstrating robustness even where theoretical guarantees are weak; and (ii) a substantial reduction in overfitting risk and computational cost, thereby enhancing the interpretability and reliability of event-sequence modeling.
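The Monte Carlo testing mentioned above can be illustrated generically: simulate the test statistic many times under the null model (e.g., from a fitted restricted Hawkes model), then compare the observed statistic against the simulated distribution. This is a minimal, paper-agnostic sketch of the standard Monte Carlo p-value with the usual +1 correction; the function name and inputs are illustrative assumptions, not the authors' implementation.

```python
def monte_carlo_p_value(observed_stat, simulated_stats):
    """Monte Carlo p-value for a one-sided test where large values
    of the statistic are evidence against the null.

    Counts how many null simulations are at least as extreme as the
    observed statistic; the +1 correction ensures the p-value is
    never exactly zero for a finite number of simulations.
    """
    b = len(simulated_stats)
    exceed = sum(1 for s in simulated_stats if s >= observed_stat)
    return (1 + exceed) / (1 + b)
```

For example, with an observed statistic of 2.0 and null simulations [1.0, 3.0, 2.5, 0.5], two simulations are at least as extreme, giving a p-value of (1 + 2) / (1 + 4) = 0.6.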
📝 Abstract
The Hawkes model is a past-dependent point process, widely used across fields to model the temporal clustering of events. Extending this framework, the multidimensional marked Hawkes process incorporates multiple interacting event types and additional marks, enhancing its ability to model complex dependencies in multivariate time series data. However, increasing the complexity of the model also increases the computational cost of the associated estimation methods and may induce overfitting. It is therefore essential to find a trade-off between the accuracy and the artificial complexity of the model. To identify the appropriate variant of the Hawkes process, we address, in this paper, the tasks of model fit evaluation and parameter testing for marked Hawkes processes. This article focuses on parametric Hawkes processes with exponential memory kernels, a variant popular for its theoretical and practical advantages. Our work introduces robust testing methodologies for assessing model parameters and complexity, building upon and extending previous theoretical frameworks. We then validate the practical robustness of these tests through comprehensive numerical studies, especially in scenarios where theoretical guarantees remain incomplete.
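A large part of the exponential kernel's practical appeal is that the conditional intensity admits an O(n) recursion instead of an O(n²) sum over all past events, which keeps likelihood evaluation cheap. Below is a minimal sketch for the one-dimensional unmarked case; the parameter names (mu for the baseline rate, alpha for the jump size, beta for the decay) are conventional notation assumed for illustration, not the paper's.

```python
import math

def exp_hawkes_intensity(times, mu, alpha, beta):
    """Conditional intensity of a 1-D Hawkes process with exponential
    kernel, evaluated at (just before) each event time:

        lambda(t_k) = mu + alpha * sum_{i < k} exp(-beta * (t_k - t_i))

    The excitation sum A_k satisfies the recursion
        A_k = exp(-beta * (t_k - t_{k-1})) * (1 + A_{k-1}),  A_1 = 0,
    so each event costs O(1) instead of summing over all past events.
    """
    intensities = []
    A = 0.0          # accumulated excitation from past events
    prev_t = None
    for t in times:  # times assumed sorted in increasing order
        if prev_t is not None:
            A = math.exp(-beta * (t - prev_t)) * (1.0 + A)
        intensities.append(mu + alpha * A)
        prev_t = t
    return intensities
```

The same recursion underlies fast log-likelihood computation for maximum likelihood estimation, and it generalizes component-wise to the multidimensional marked case.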