🤖 AI Summary
To address insufficient uncertainty modeling in state representations for partially observable reinforcement learning (RL), this paper introduces the Differentiable Kalman Filter (DKF) layer, a plug-and-play state-space module. The DKF layer explicitly models latent states as Gaussian distributions, enabling closed-form probabilistic inference, and is computed efficiently with a parallel scan algorithm, allowing seamless end-to-end integration into model-free RL architectures. Unlike conventional RNNs or Transformers, which lack an explicit probabilistic filtering mechanism, the DKF layer is the first to reformulate analytical Kalman filtering as a learnable, parallelizable state-representation unit. Experiments across diverse partially observable benchmarks show that DKF significantly outperforms LSTM, GRU, and Transformer baselines; notably, on tasks demanding deep uncertainty reasoning, it yields 17–32% higher cumulative returns.
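The closed-form probabilistic inference the summary refers to is the standard Kalman predict/update recursion over a Gaussian belief. Below is a minimal sequential sketch of that recursion; the paper's layer learns the model matrices and evaluates the recursion with a parallel scan, and all function and variable names here are illustrative, not the paper's API:

```python
import numpy as np

def kalman_filter_layer(ys, A, C, Q, R, mu0, P0):
    """Closed-form Gaussian filtering in a linear state-space model.

    ys: (T, obs_dim) observation sequence.
    Returns the filtered means and covariances at every step, which
    would serve as the (uncertainty-aware) state representation.
    """
    mu, P = mu0, P0
    means, covs = [], []
    for y in ys:
        # Predict: propagate the Gaussian belief through the dynamics.
        mu_pred = A @ mu
        P_pred = A @ P @ A.T + Q
        # Update: condition on the new observation (closed form).
        S = C @ P_pred @ C.T + R          # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
        mu = mu_pred + K @ (y - C @ mu_pred)
        P = (np.eye(len(mu)) - K @ C) @ P_pred
        means.append(mu)
        covs.append(P)
    return np.stack(means), np.stack(covs)
```

Because every step is a differentiable matrix expression, gradients of the downstream RL loss can flow back into A, C, Q, and R, which is what makes the filter trainable end-to-end.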
📝 Abstract
Optimal decision-making under partial observability requires reasoning about the uncertainty of the environment's hidden state. However, most reinforcement learning architectures handle partial observability with sequence models, such as recurrent neural networks, deterministic state-space models, and transformers, that have no internal mechanism for incorporating uncertainty into their hidden state representation. Inspired by advances in probabilistic world models for reinforcement learning, we propose a standalone Kalman filter layer that performs closed-form Gaussian inference in linear state-space models, and we train it end-to-end within a model-free architecture to maximize returns. Like efficient linear recurrent layers, the Kalman filter layer processes sequential data with a parallel scan, whose depth scales logarithmically with the sequence length. By design, Kalman filter layers are a drop-in replacement for other recurrent layers in standard model-free architectures, but, importantly, they include an explicit mechanism for probabilistic filtering of the latent state representation. Experiments on a wide variety of partially observable tasks show that Kalman filter layers excel in problems where uncertainty reasoning is key to decision-making, outperforming other stateful models.
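The parallel-scan claim rests on the fact that composing linear filtering steps is associative, so all prefix results can be computed in O(log T) depth. As a simplified illustration under that assumption, here is a Hillis-Steele-style inclusive scan over affine maps h -> a*h + b, a stand-in for the mean recurrence only; the paper's actual scan operator would compose full Gaussian conditionals, and the names below are hypothetical:

```python
def combine(e1, e2):
    # Compose two affine maps h -> a*h + b, applying e1 first, then e2.
    # This composition is associative, which is what licenses a parallel scan.
    a1, b1 = e1
    a2, b2 = e2
    return a2 * a1, a2 * b1 + b2

def parallel_prefix(elems, combine):
    # Hillis-Steele inclusive scan: ceil(log2(T)) levels; within a level
    # every combine is independent and could run in parallel on hardware.
    out = list(elems)
    d = 1
    while d < len(out):
        nxt = list(out)
        for i in range(d, len(out)):  # conceptually a parallel loop
            nxt[i] = combine(out[i - d], out[i])
        out = nxt
        d *= 2
    return out

# The scan reproduces the sequential recurrence h_t = a_t * h_{t-1} + b_t
# (with h_0 = 0), but in logarithmic rather than linear depth.
elems = [(0.5, float(t + 1)) for t in range(8)]
prefixes = parallel_prefix(elems, combine)
h = 0.0
for a, b in elems:
    h = a * h + b
assert abs(prefixes[-1][1] - h) < 1e-9
```

The same structure carries over to the Kalman filter once each step's predict/update is expressed as an element of an associative composition, which is what gives the layer the scaling profile of efficient linear recurrent layers.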