🤖 AI Summary
This work addresses posterior inference in high-dimensional systems of coupled continuous-time, discrete-state hidden Markov models observed through noise at discrete times—a setting where conditioning on the observations via Doob's *h*-transform yields an analytically intractable posterior process that must be approximated. The authors propose an approximate inference framework built on an interacting particle system over the latent variables. The key contribution is an efficient, learnable parameterization of forward-looking twist potentials that explicitly incorporate future observation information; combined with a twisted Sequential Monte Carlo sampling scheme, this yields efficient, low-variance posterior approximations. The method is validated on two challenging systems: a graph-structured latent SIRS epidemic model and a neural model of wildfire spread dynamics trained on real data. Results demonstrate substantial improvements in both inference accuracy and computational efficiency for high-dimensional continuous-time Markov chains under noisy observations.
📝 Abstract
Systems of interacting continuous-time Markov chains are a powerful model class, but inference is typically intractable in high-dimensional settings. Auxiliary information, such as noisy observations, is typically only available at discrete times, and incorporating it via Doob's $h$-transform gives rise to an intractable posterior process that requires approximation. We introduce Latent Interacting Particle Systems, a model class that parameterizes the generator of each Markov chain in the system. Our inference method involves estimating look-ahead functions (twist potentials) that anticipate future information, for which we introduce an efficient parameterization. We incorporate this approximation into a twisted Sequential Monte Carlo sampling scheme. We demonstrate the effectiveness of our approach on a challenging posterior inference task for a latent SIRS model on a graph, and on a neural model for wildfire spread dynamics trained on real data.
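To make the twist-potential idea concrete, here is a minimal sketch of twisted SMC on a toy two-state hidden Markov chain. This is not the paper's method: the transition matrix `P`, emission model `emit`, initial distribution `mu`, and the hand-picked one-step look-ahead twist $\psi_t(x) = p(y_t \mid x)$ are all invented for illustration, whereas the paper learns a forward-looking twist parameterization for coupled continuous-time chains. The sketch only shows the mechanics: particles are resampled proportionally to a look-ahead potential and propagated through the twisted kernel, and the resulting likelihood estimate is checked against the exact forward algorithm.

```python
import numpy as np

# Toy twisted-SMC illustration (NOT the paper's model or learned twist).
rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # transition matrix P[x, x']
emit = np.array([[0.8, 0.2],
                 [0.3, 0.7]])       # emit[x, y] = p(y | x)
mu = np.array([0.5, 0.5])           # initial state distribution

def exact_loglik(ys):
    """Forward algorithm; ground truth to check the SMC estimate."""
    alpha = mu * emit[:, ys[0]]
    ll = 0.0
    for y in ys[1:]:
        c = alpha.sum()
        ll += np.log(c)
        alpha = (alpha / c) @ P * emit[:, y]
    return ll + np.log(alpha.sum())

def twisted_smc_loglik(ys, n=2000):
    """Twisted SMC with the one-step twist psi_t(x) = p(y_t | x)."""
    T = len(ys)
    g = emit[:, ys].T               # g[t, x] = p(y_t | x)
    logZ = np.log(mu @ g[0])        # p(y_1) is exact under this twist
    probs = mu * g[0]
    x = rng.choice(2, size=n, p=probs / probs.sum())
    for t in range(T - 1):
        # look-ahead twist value: expected likelihood of the NEXT
        # observation, w_i = sum_x' P[x_i, x'] p(y_{t+1} | x')
        w = (P @ g[t + 1])[x]
        logZ += np.log(w.mean())
        # resample proportionally to the twist (anticipate the future)
        x = x[rng.choice(n, size=n, p=w / w.sum())]
        # propagate through the twisted kernel P(x'|x) p(y_{t+1}|x') / w_i
        q = P[x] * g[t + 1]
        q /= q.sum(axis=1, keepdims=True)
        x = np.where(rng.random(n) < q[:, 0], 0, 1)
    return logZ

# simulate a trajectory, then compare the SMC estimate to the exact value
T = 30
xs, ys = np.zeros(T, dtype=int), np.zeros(T, dtype=int)
xs[0] = rng.choice(2, p=mu)
ys[0] = rng.choice(2, p=emit[xs[0]])
for t in range(1, T):
    xs[t] = rng.choice(2, p=P[xs[t - 1]])
    ys[t] = rng.choice(2, p=emit[xs[t]])
print(exact_loglik(ys), twisted_smc_loglik(ys))
```

Because every particle is pushed toward states that explain the upcoming observation before it arrives, the incremental weights have much lower variance than a bootstrap filter that only reacts to observations after the fact; the paper's learned twists play this role over a longer horizon and in far higher-dimensional state spaces.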