🤖 AI Summary
Addressing the challenge of tracking beliefs over multimodal, rapidly evolving latent states in partially observable systems, this paper proposes a Bayesian filtering framework built on latent embeddings: it maps high-dimensional belief distributions to fixed-length, low-dimensional embedding vectors and designs a differentiable, particle-inspired update mechanism within that embedding space. The approach combines the computational efficiency of classical filtering with the expressive power of deep generative models. It mitigates particle impoverishment, supports end-to-end training, and balances inference speed against modeling accuracy. Experiments in three canonical partially observable environments show that the method robustly captures multimodal posterior distributions, improving state estimation accuracy and filtering robustness. The framework offers a practical paradigm for real-time belief maintenance in complex dynamic systems.
📝 Abstract
We present Neural Bayesian Filtering (NBF), an algorithm for maintaining distributions over hidden states, called beliefs, in partially observable systems. NBF is trained to find a good latent representation of the beliefs induced by a task. It maps beliefs to fixed-length embedding vectors, which condition generative models for sampling. During filtering, particle-style updates compute posteriors in this embedding space using incoming observations and the environment's dynamics. NBF combines the computational efficiency of classical filters with the expressiveness of deep generative models, tracking rapidly shifting, multimodal beliefs while mitigating the risk of particle impoverishment. We validate NBF in state estimation tasks in three partially observable environments.
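To make the filtering loop concrete, here is a minimal sketch of a particle-style update carried out in an embedding space. This is not the paper's implementation: the dynamics model `transition`, the observation log-likelihood `obs_loglik`, and all dimensions are hypothetical stand-ins for the learned components NBF would provide. The sketch shows only the generic propagate, reweight, and resample cycle, with resampling as the standard mitigation for weight degeneracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(z):
    # Stand-in for a learned dynamics model acting on embedding vectors:
    # a simple contraction plus Gaussian noise.
    return 0.9 * z + rng.normal(0.0, 0.1, size=z.shape)

def obs_loglik(z, obs):
    # Stand-in for a learned observation model: Gaussian log-likelihood
    # of the observation under each embedding (up to a constant).
    return -0.5 * np.sum((z - obs) ** 2, axis=-1)

def particle_step(particles, obs):
    """One filtering step on a set of embedding-space particles."""
    # 1. Propagate each particle through the dynamics model.
    particles = transition(particles)
    # 2. Reweight by observation likelihood (log-space for stability).
    logw = obs_loglik(particles, obs)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # 3. Resample in proportion to the weights to curb degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Track a hypothetical 2-D embedding over two observations.
particles = rng.normal(size=(256, 2))
for obs in [np.array([0.5, -0.2]), np.array([0.4, -0.1])]:
    particles = particle_step(particles, obs)
print(particles.shape)
```

In NBF the embeddings additionally condition a generative model that can sample full states from the belief; the loop above covers only the belief-update side of that picture.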