Learning Optimal Filters Using Variational Inference

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of accurately estimating the filtering distribution (i.e., the state posterior) in high-dimensional nonlinear dynamical systems. To overcome the bias inherent in the ensemble Kalman filter (EnKF) under strong nonlinearity, and its reliance on labor-intensive manual tuning, the authors propose an end-to-end learning framework grounded in variational inference. The filter's analysis step is modeled as a learnable, parameterized analysis map; key components, including the gain matrix, covariance inflation, and localization, are jointly optimized via a variational objective. This systematic integration of variational inference into filter design enables unified modeling and automatic calibration of the analysis step. Experiments on a range of linear and nonlinear systems show that the method reduces filtering bias, improves posterior estimation accuracy, and greatly diminishes dependence on manual parameter tuning.

📝 Abstract
Filtering - the task of estimating the conditional distribution for states of a dynamical system given partial and noisy observations - is important in many areas of science and engineering, including weather and climate prediction. However, the filtering distribution is generally intractable to obtain for high-dimensional, nonlinear systems. Filters used in practice, such as the ensemble Kalman filter (EnKF), provide biased probabilistic estimates for nonlinear systems and have numerous tuning parameters. Here, we present a framework for learning a parameterized analysis map - the transformation that takes samples from a forecast distribution, and combines with an observation, to update the approximate filtering distribution - using variational inference. In principle this can lead to a better approximation of the filtering distribution, and hence smaller bias. We show that this methodology can be used to learn the gain matrix, in an affine analysis map, for filtering linear and nonlinear dynamical systems; we also study the learning of inflation and localization parameters for an EnKF. The framework developed here can also be used to learn new filtering algorithms with more general forms for the analysis map.
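The affine analysis map described in the abstract can be illustrated with the stochastic EnKF update, in which the gain matrix is exactly the quantity the paper proposes to learn. Below is a minimal numpy sketch of that update, not the paper's implementation; the function name, the perturbed-observation form, and the direct matrix inverse are assumptions made for clarity.

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """Stochastic EnKF analysis step: an affine map taking a forecast
    ensemble and an observation to an updated (analysis) ensemble.

    Xf : (d, N) forecast ensemble (d state dims, N members)
    y  : (m,)   observation
    H  : (m, d) linear observation operator
    R  : (m, m) observation-noise covariance
    """
    d, N = Xf.shape
    xbar = Xf.mean(axis=1, keepdims=True)
    A = Xf - xbar                                  # ensemble anomalies
    C = A @ A.T / (N - 1)                          # sample forecast covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)   # Kalman gain (learnable in the paper)
    # Perturbed observations give each member its own innovation.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return Xf + K @ (Y - H @ Xf)                   # affine in the forecast samples
```

In the paper's framework, the hand-derived gain above is replaced by a parameterized gain optimized end-to-end via a variational objective, with the same affine structure.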
Problem

Research questions and friction points this paper is trying to address.

Estimating states of dynamical systems with noisy observations
Reducing bias in nonlinear filtering distributions
Learning optimal filter parameters using variational inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses variational inference for filter learning
Learns affine analysis map parameters
Optimizes EnKF inflation and localization
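The inflation and localization parameters mentioned above are the standard EnKF tuning knobs that the paper learns automatically. A minimal sketch of what these two operations do, under assumed conventions (multiplicative inflation of anomalies; a Gaussian taper on a periodic 1-D grid as a stand-in for the usual Gaspari-Cohn function):

```python
import numpy as np

def inflate(Xf, alpha):
    """Multiplicative covariance inflation: scale ensemble anomalies
    by alpha >= 1, leaving the ensemble mean unchanged."""
    xbar = Xf.mean(axis=1, keepdims=True)
    return xbar + alpha * (Xf - xbar)

def localize(C, L):
    """Schur (elementwise) localization: taper spurious long-range
    sample correlations in C, assuming states sit on a periodic
    1-D grid with localization radius L."""
    d = C.shape[0]
    i = np.arange(d)
    dist = np.abs(i[:, None] - i[None, :])
    dist = np.minimum(dist, d - dist)           # periodic grid distance
    taper = np.exp(-0.5 * (dist / L) ** 2)      # Gaussian taper
    return C * taper
```

In the learned-filter setting, `alpha` and `L` are treated as parameters of the analysis map and calibrated by the variational objective rather than tuned by hand.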
Enoch Luk
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, California, USA
Eviatar Bach
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, California, USA; Department of Environmental Science and Engineering, California Institute of Technology, Pasadena, California, USA; Department of Meteorology and Department of Mathematics and Statistics, University of Reading, Reading, UK
Ricardo Baptista
University of Toronto
uncertainty quantification, inverse problems, data assimilation, computational statistics
Andrew M. Stuart
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, California, USA