MCMC for State Space models

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This chapter surveys MCMC methods for Bayesian inference in state-space models, in which a latent Markov process is observed through conditionally independent noisy measurements. Exploiting these conditional-independence properties, it constructs a range of Gibbs-sampler schemes, including ones that use the forward-backward algorithm to draw the latent process from its full conditional given the parameters. For models where exact forward-backward recursions are unavailable, it turns to particle MCMC, which uses particle filters either to approximately simulate the latent process or to estimate the likelihood of the observations. Throughout, the chapter gives intuition and informal theory on which model properties drive sampler efficiency and on how techniques such as reparameterization can improve mixing.

📝 Abstract
A state-space model is a time-series model that has an unobserved latent process from which we take noisy measurements over time. The observations are conditionally independent given the latent process and the latent process itself is Markovian. These properties lead to simplifications for the conditional distribution of the latent process given the parameters and the observations. This chapter looks at how we can leverage the properties of state-space models to construct efficient MCMC samplers. We consider a range of Gibbs-sampler schemes, including those which use the forward-backward algorithm to simulate from the full conditional of the latent process given the parameters. For models where the forward-backward algorithm is not applicable we look at particle MCMC algorithms that, given the parameters, use particle filters to approximately simulate from the latent process or estimate the likelihood of the observations. Throughout, we provide intuition and informally discuss theory about the properties of the model that impact the efficiency of the different algorithms and how approaches such as reparameterization can improve mixing.
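As a concrete illustration of the particle-MCMC ingredient described above, here is a minimal sketch of a bootstrap particle filter that estimates the log-likelihood of the observations given the parameters. The AR(1)-plus-noise model, parameter values, and function names are illustrative assumptions for this sketch, not taken from the chapter:

```python
import math
import random


def particle_filter_loglik(ys, n_particles=500, seed=0):
    """Bootstrap particle filter log-likelihood estimate for a toy model:
        x_t = 0.9 * x_{t-1} + N(0, 1),    y_t = x_t + N(0, 0.5**2).
    Inside particle MCMC, this estimate stands in for the intractable
    likelihood p(y_{1:T} | theta) when evaluating a Metropolis-Hastings ratio.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    loglik = 0.0
    for y in ys:
        # Propagate each particle through the latent Markov transition.
        particles = [0.9 * x + rng.gauss(0.0, 1.0) for x in particles]
        # Weight by the observation density p(y_t | x_t), here N(x_t, 0.5**2).
        ws = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) / (0.5 * math.sqrt(2 * math.pi))
              for x in particles]
        # The mean weight estimates p(y_t | y_{1:t-1}).
        loglik += math.log(sum(ws) / n_particles)
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choices(particles, weights=ws, k=n_particles)
    return loglik
```

Averaging over more particles reduces the variance of the estimate; crucially, its unbiasedness is what lets particle MCMC target the exact posterior despite using an approximate likelihood.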
Problem

Research questions and friction points this chapter is trying to address.

Developing efficient MCMC samplers for state-space models
Applying Gibbs-samplers with forward-backward latent process simulation
Using particle MCMC when exact latent simulation is infeasible
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploiting the Markov and conditional-independence structure of state-space models to build efficient samplers
Simulating the latent process from its full conditional via the forward-backward algorithm within a Gibbs sweep
Falling back on particle MCMC, with particle-filter likelihood estimates, when the forward-backward algorithm is inapplicable
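The forward-backward Gibbs step listed above can be sketched for a discrete-state hidden Markov model, where forward filtering followed by backward sampling draws the entire latent path in one block from its full conditional. The two-state transition, emission, and initial distributions below are made-up illustrations, not values from the chapter:

```python
import random


def ffbs_sample(ys, trans, emit, init, seed=1):
    """Forward filtering, backward sampling (FFBS) for a discrete HMM.
    trans[i][j] = p(x_t = j | x_{t-1} = i)
    emit[i][y]  = p(y_t = y | x_t = i)
    init[i]     = p(x_1 = i)
    Returns one draw of the latent path x_{1:T} given y_{1:T}.
    """
    rng = random.Random(seed)
    K = len(init)
    # Forward pass: normalized filtered distributions p(x_t | y_{1:t}).
    prev = [init[i] * emit[i][ys[0]] for i in range(K)]
    s = sum(prev)
    alphas = [[p / s for p in prev]]
    for y in ys[1:]:
        cur = [sum(alphas[-1][i] * trans[i][j] for i in range(K)) * emit[j][y]
               for j in range(K)]
        s = sum(cur)
        alphas.append([p / s for p in cur])
    # Backward pass: sample x_T from the last filter, then each
    # x_t proportional to alpha_t(i) * trans[i][x_{t+1}].
    xs = [None] * len(ys)
    xs[-1] = rng.choices(range(K), weights=alphas[-1])[0]
    for t in range(len(ys) - 2, -1, -1):
        w = [alphas[t][i] * trans[i][xs[t + 1]] for i in range(K)]
        xs[t] = rng.choices(range(K), weights=w)[0]
    return xs
```

Because the whole path is updated jointly rather than one state at a time, this step avoids the poor mixing of single-site updates when neighboring latent states are strongly dependent.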