🤖 AI Summary
This work addresses the theoretical and computational challenges that nonlinearity and model misspecification pose for kernel identification in Hawkes processes, proposing a closed-form least squares estimation framework built on pre-specified causal basis functions. The approach parameterizes the kernel linearly, thereby circumventing iterative optimization and, for the first time under this formulation, yielding a closed-form solution for parameter estimation. A comprehensive asymptotic theory is established for both correctly specified and misspecified models: the estimator is shown to exist almost surely and to converge to the true parameter under correct specification, or to a well-defined pseudo-true counterpart under misspecification, and an explicit central limit theorem is derived for each regime. Numerical experiments corroborate the theoretical guarantees and demonstrate the practical superiority of the proposed method.
📝 Abstract
Driven by the recent surge in neural-inspired modeling, point processes have gained significant traction in systems and control. While the Hawkes process is the standard model for characterizing random event sequences with memory, identifying its unknown kernels is often hindered by nonlinearity. Approaches using prescribed basis kernels have emerged to enable linear parameterization, yet they typically rely on iterative likelihood methods and lack rigorous analysis under model misspecification. This paper justifies a closed-form least squares identification framework for Hawkes processes with prescribed kernels. We guarantee the estimator's existence via the almost-sure positive definiteness of the empirical Gram matrix and prove convergence to the true parameters under correct specification, or to well-defined pseudo-true parameters under misspecification. Furthermore, we derive explicit central limit theorems for both regimes, providing a complete and interpretable asymptotic theory. We demonstrate these theoretical findings through comparative numerical simulations.
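The estimation idea in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions not taken from the paper: a univariate Hawkes process with a single exponential kernel, a discretized least squares objective (intensity regressed on basis features at bin left edges), and two exponential basis kernels whose decay rates (1.5 and 3.0, one matching the truth) are chosen arbitrarily. It is a sketch of the general technique, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ground truth: lambda(t) = mu + sum_{t_i < t} alpha * beta * exp(-beta (t - t_i))
mu, alpha, beta, T = 0.5, 0.4, 1.5, 1000.0

def intensity(t, events):
    """Conditional intensity of the true model at time t given past event times."""
    past = np.asarray(events)
    return mu + alpha * beta * np.exp(-beta * (t - past)).sum()

# Simulate by Ogata's thinning: the intensity decays between events,
# so its current value is a valid upper bound until the next event.
t, events = 0.0, []
while True:
    lam_bar = intensity(t, events)
    t += rng.exponential(1.0 / lam_bar)
    if t >= T:
        break
    if rng.random() * lam_bar <= intensity(t, events):
        events.append(t)
events = np.asarray(events)

# Closed-form least squares fit on a time grid.
# Model: lambda_theta(t) = theta_0 + sum_k theta_k S_k(t), where
# S_k(t) = sum_{t_i < t} phi_k(t - t_i) for basis kernels phi_k(s) = b_k exp(-b_k s).
dt = 0.1
n_bins = int(T / dt)
edges = np.linspace(0.0, T, n_bins + 1)
grid = edges[:-1]                              # left bin edges t_j
y = np.histogram(events, bins=edges)[0] / dt   # empirical intensity proxy N_j / dt

betas = [1.5, 3.0]                             # candidate decay rates; 1.5 matches the truth
lags = grid[:, None] - events[None, :]
lags = np.where(lags > 0, lags, np.inf)        # only strictly past events contribute (causality)
X = np.column_stack([np.ones(n_bins)] + [(b * np.exp(-b * lags)).sum(axis=1) for b in betas])

G = X.T @ X / n_bins                           # empirical Gram matrix
theta = np.linalg.lstsq(X, y, rcond=None)[0]   # closed-form LS estimate

# Each basis kernel integrates to one, so the pseudo-true parameters in this
# toy setup are (mu, alpha, 0) = (0.5, 0.4, 0.0); theta should land nearby,
# up to discretization bias and sampling noise.
print("min eigenvalue of Gram matrix:", np.linalg.eigvalsh(G).min())
print("theta_hat =", np.round(theta, 3))
```

Because the objective is linear in `theta`, the fit is a single normal-equations solve rather than an iterative likelihood maximization, and the positive definiteness of `G` is exactly what guarantees the estimator exists and is unique.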