Efficient Inference for Coupled Hidden Markov Models in Continuous Time and Discrete Space

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of posterior inference in high-dimensional continuous-time, discrete-state coupled hidden Markov models under noisy discrete-time observations, where conditioning on the observations via Doob's *h*-transform yields a posterior process that is analytically intractable. The authors propose an approximate inference framework based on an interacting particle system over the latent variables. The key contribution is a learnable, forward-looking function parameterization that explicitly incorporates information from future observations; combined with twisted potential functions and sequential Monte Carlo sampling, it yields efficient, low-variance posterior approximations. The method is validated on two complex systems: a graph-structured latent SIRS epidemic model and a neural dynamical model of wildfire propagation trained on real data. Results demonstrate substantial improvements in both inference accuracy and computational efficiency for high-dimensional continuous-time Markov chains under noisy observations.

📝 Abstract
Systems of interacting continuous-time Markov chains are a powerful model class, but inference is typically intractable in high-dimensional settings. Auxiliary information, such as noisy observations, is typically only available at discrete times, and incorporating it via a Doob's $h$-transform gives rise to an intractable posterior process that requires approximation. We introduce Latent Interacting Particle Systems, a model class parameterizing the generator of each Markov chain in the system. Our inference method involves estimating look-ahead functions (twist potentials) that anticipate future information, for which we introduce an efficient parameterization. We incorporate this approximation in a twisted Sequential Monte Carlo sampling scheme. We demonstrate the effectiveness of our approach on a challenging posterior inference task for a latent SIRS model on a graph, and on a neural model for wildfire spread dynamics trained on real data.
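To make the model class concrete: the latent process in the SIRS experiment is a set of coupled continuous-time Markov chains on a graph, where each node's transition rates depend on the states of its neighbours. The sketch below simulates such a system with Gillespie's algorithm; the specific rate constants, the three-state S→I→R→S cycle, and the function name are illustrative assumptions, not the paper's exact parameterization (which uses a learnable generator).

```python
import numpy as np

# Node states: 0 = S (susceptible), 1 = I (infected), 2 = R (recovered).
# All rate constants below are illustrative, not the paper's values.
def gillespie_sirs(adj, x0, beta=0.5, gamma=0.2, xi=0.05, t_max=10.0, seed=0):
    """Simulate a graph SIRS model as a coupled CTMC via Gillespie's algorithm.

    adj: (n, n) 0/1 adjacency matrix; x0: initial state per node.
    Each node's S->I rate depends on its infected neighbours, which is
    what couples the chains into one interacting system.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=int)
    t, path = 0.0, [(0.0, x.copy())]
    while t < t_max:
        infected = (x == 1).astype(float)
        # Per-node event rates: S->I scales with infected neighbours,
        # I->R at rate gamma, R->S (loss of immunity) at rate xi.
        rates = np.where(x == 0, beta * (adj @ infected),
                         np.where(x == 1, gamma, xi))
        total = rates.sum()
        if total <= 0:  # absorbing configuration: nothing can fire
            break
        t += rng.exponential(1.0 / total)  # waiting time to next event
        if t >= t_max:
            break
        node = rng.choice(len(x), p=rates / total)  # which node jumps
        x[node] = (x[node] + 1) % 3  # advance along the S->I->R->S cycle
        path.append((t, x.copy()))
    return path
```

Running this on a small ring graph with one initially infected node produces a piecewise-constant trajectory of the joint state, which is the kind of latent path the paper's inference scheme must recover from noisy snapshots.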
Problem

Research questions and friction points this paper is trying to address.

Intractable inference in high-dimensional coupled Markov chains
Approximating posterior processes with discrete noisy observations
Estimating twist potentials for efficient sequential Monte Carlo
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameterizes the generator of each Markov chain in the system
Estimates look-ahead functions anticipating future information
Uses twisted Sequential Monte Carlo sampling scheme
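The innovations above can be sketched as a generic twisted particle filter. In this simplified variant, particles are still propagated under the prior dynamics and the look-ahead function only enters through the weights (auxiliary-particle-filter style), rather than through a twisted proposal as in the paper; the function names and interfaces are illustrative assumptions.

```python
import numpy as np

def twisted_smc(propagate, loglik, log_twist, x0_sampler, ys,
                n_particles=200, seed=0):
    """Illustrative twisted SMC sketch (not the paper's exact scheme).

    propagate(x, rng) -> x': simulate the latent dynamics between observations.
    loglik(y, x)      -> log g(y | x): observation log-likelihood.
    log_twist(t, x)   -> log psi_t(x): look-ahead function scoring how
    compatible state x is with future observations; psi = 1 everywhere
    recovers the bootstrap particle filter.
    """
    rng = np.random.default_rng(seed)
    xs = x0_sampler(n_particles, rng)
    logw = np.zeros(n_particles)
    log_psi_prev = np.array([log_twist(0, x) for x in xs])
    for t, y in enumerate(ys):
        xs = np.array([propagate(x, rng) for x in xs])
        log_psi = np.array([log_twist(t + 1, x) for x in xs])
        # Twisted weights: likelihood times psi_{t+1}(x') / psi_t(x),
        # so the twist telescopes across time steps.
        logw += np.array([loglik(y, x) for x in xs]) + log_psi - log_psi_prev
        # Multinomial resampling when the effective sample size drops.
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=w)
            xs, log_psi = xs[idx], log_psi[idx]
            logw = np.zeros(n_particles)
        log_psi_prev = log_psi
    return xs, logw
```

A well-chosen psi concentrates particles on latent paths consistent with upcoming observations, which is what drives the variance reduction the paper reports; learning that psi efficiently is the paper's central contribution.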