🤖 AI Summary
This paper addresses state estimation in nonlinear, non-Gaussian state-space models. The authors propose a Bayesian inference method grounded in a variational Lagrangian framework: Bayesian inference is formulated as a sequence of entropy-constrained optimization problems subject to dynamic constraints, yielding a family of forward–backward algorithms whose variational posterior factorization structure is explicitly derived. The method combines Gauss–Markov approximations, generalized statistical linear regression, and Fourier–Hermite moment-matching techniques to enable efficient recursive inference, with computational cost linear in the number of time steps. Empirical evaluation demonstrates improvements in efficiency, robustness, and numerical stability for state estimation under strongly nonlinear and non-Gaussian dynamics.
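For intuition, an entropy-constrained (entropic trust-region) variational step can be written generically as below; the notation is illustrative and not the paper's exact formulation. Each iteration minimizes the variational free energy while a KL bound keeps the new posterior approximation close to the previous iterate, and attaching a multiplier to that bound gives the Lagrangian whose stationarity conditions the abstract's forward–backward recursions would solve.

```latex
% Generic entropic trust-region step for variational inference.
% q^{(i)} is the current posterior approximation over states x_{0:T},
% y_{1:T} are observations, and \epsilon is the trust-region radius.
% Illustrative notation only, not the paper's exact Lagrangian.
q^{(i+1)} = \arg\min_{q \in \mathcal{Q}}
    \; \mathbb{E}_{q}\!\bigl[-\log p(x_{0:T}, y_{1:T})\bigr] - \mathcal{H}(q)
\quad \text{s.t.} \quad
\mathrm{KL}\!\bigl(q \,\|\, q^{(i)}\bigr) \le \epsilon .
```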
📝 Abstract
We present a class of algorithms for state estimation in nonlinear, non-Gaussian state-space models. Our approach is based on a variational Lagrangian formulation that casts Bayesian inference as a sequence of entropic trust-region updates subject to dynamic constraints. This framework gives rise to a family of forward–backward algorithms whose structure is determined by the chosen factorization of the variational posterior. By focusing on Gauss–Markov approximations, we derive recursive schemes whose computational cost is linear in the number of time steps. For general nonlinear, non-Gaussian models, we close the recursions using generalized statistical linear regression and Fourier–Hermite moment matching.
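As a concrete illustration of one ingredient, statistical linear regression replaces a nonlinear model function with an affine approximation whose coefficients are moment-matched under the current Gaussian approximation, with the required moments computable by Hermite quadrature (the quadrature counterpart of a Fourier–Hermite expansion). The following is a minimal one-dimensional sketch; the function `slr_gauss_hermite` and the toy dynamics are illustrative assumptions, not the paper's implementation.

```python
# Minimal 1-D sketch of statistical linear regression (SLR) via a
# Gauss--Hermite rule. Illustrative only; not the paper's code.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss


def slr_gauss_hermite(f, mean, var, order=10):
    """Fit f(x) ~ a*x + b + N(0, omega) with respect to q(x) = N(mean, var)."""
    nodes, weights = hermegauss(order)    # probabilists' Hermite quadrature
    weights = weights / weights.sum()     # normalize so the weights sum to 1
    x = mean + np.sqrt(var) * nodes       # quadrature points under q
    fx = f(x)
    f_mean = weights @ fx                           # E_q[f(x)]
    c_xf = weights @ ((x - mean) * (fx - f_mean))   # Cov_q[x, f(x)]
    c_ff = weights @ ((fx - f_mean) ** 2)           # Var_q[f(x)]
    a = c_xf / var              # regression gain
    b = f_mean - a * mean       # regression offset
    omega = c_ff - a * c_xf     # residual (linearization-error) variance
    return a, b, omega


# Example: linearize a strongly nonlinear map around q(x) = N(0.5, 0.2).
a, b, omega = slr_gauss_hermite(lambda x: np.sin(2.0 * x), mean=0.5, var=0.2)
```

In a Gauss–Markov recursion of the kind the abstract describes, such an affine surrogate (with its residual variance `omega`) is what allows Kalman-type forward–backward updates to be applied to a nonlinear, non-Gaussian model.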