Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability

📅 2024-09-25
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient uncertainty modeling in state representations for partially observable reinforcement learning (RL), this paper introduces the Kalman filter (KF) layer, a plug-and-play state-space module. The KF layer explicitly models latent states as Gaussian distributions, enabling closed-form probabilistic inference, and achieves efficient differentiability via a parallel scan algorithm, allowing seamless integration into model-free, end-to-end RL architectures. Unlike conventional RNNs and Transformers, which lack an explicit probabilistic filtering mechanism, the KF layer reformulates analytical Kalman filtering as a learnable, parallelizable state-representation unit. Experiments across diverse partially observable benchmarks show that the KF layer outperforms LSTM, GRU, and Transformer baselines; notably, on tasks demanding deep uncertainty reasoning, it yields 17–32% higher cumulative returns.

📝 Abstract
Optimal decision-making under partial observability requires reasoning about the uncertainty of the environment's hidden state. However, most reinforcement learning architectures handle partial observability with sequence models, such as recurrent neural networks, deterministic state-space models and transformers, that have no internal mechanism to incorporate uncertainty in their hidden state representation. Inspired by advances in probabilistic world models for reinforcement learning, we propose a standalone Kalman filter layer that performs closed-form Gaussian inference in linear state-space models and train it end-to-end within a model-free architecture to maximize returns. Similar to efficient linear recurrent layers, the Kalman filter layer processes sequential data using a parallel scan, which scales logarithmically with the sequence length. By design, Kalman filter layers are a drop-in replacement for other recurrent layers in standard model-free architectures, but importantly they include an explicit mechanism for probabilistic filtering of the latent state representation. Experiments in a wide variety of tasks with partial observability show that Kalman filter layers excel in problems where uncertainty reasoning is key for decision-making, outperforming other stateful models.
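The closed-form Gaussian inference the abstract refers to is the classical Kalman predict/update recursion. The sketch below shows one such step in NumPy; the variable names (`A`, `C`, `Q`, `R`) are the standard textbook ones, not identifiers from the paper, which learns these matrices end-to-end inside the RL architecture rather than fixing them by hand.

```python
import numpy as np

def kalman_step(mu, P, y, A, C, Q, R):
    """One closed-form Gaussian filtering step in a linear state-space model.

    Predict with dynamics x' = A x + w, w ~ N(0, Q), then condition on the
    observation y = C x' + v, v ~ N(0, R).
    """
    # Predict: propagate mean and covariance through the linear dynamics.
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    # Update: the Kalman gain weighs the observation by its relative certainty.
    S = C @ P_pred @ C.T + R               # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)    # Kalman gain
    mu_new = mu_pred + K @ (y - C @ mu_pred)
    P_new = (np.eye(P_pred.shape[0]) - K @ C) @ P_pred
    return mu_new, P_new
```

Because every operation is differentiable, gradients of the return can flow through the filter into the learned state-space parameters, which is what makes the layer trainable end-to-end.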
Problem

Research questions and friction points this paper is trying to address.

Handling partial observability in reinforcement learning
Incorporating uncertainty in hidden state representation
Enhancing decision-making with probabilistic filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kalman filter layer integration
Probabilistic latent state filtering
Parallel scan for efficiency
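The parallel-scan efficiency claim rests on the fact that composing linear-Gaussian filtering steps is associative, so a prefix scan can replace the sequential recurrence. The sketch below illustrates only the simplest instance of that algebra, an affine recurrence on scalar means h_t = a_t h_{t-1} + b_t; the paper's layer composes full Gaussian filtering operators (means and covariances), and all function names here are illustrative.

```python
def combine(op1, op2):
    """Compose two affine maps h -> a*h + b. Associativity of this
    composition is what lets a prefix scan replace the recurrence."""
    a1, b1 = op1
    a2, b2 = op2
    return (a2 * a1, a2 * b1 + b2)

def prefix_scan(ops):
    """Inclusive Hillis-Steele scan over affine maps. With n parallel
    processors this runs in O(log n) depth; here it executes sequentially
    on scalars purely to illustrate the algebra."""
    n = len(ops)
    out = list(ops)
    step = 1
    while step < n:
        # Descending order keeps out[i - step] at the previous level
        # until out[i] has consumed it, so the update is safe in place.
        for i in range(n - 1, step - 1, -1):
            out[i] = combine(out[i - step], out[i])
        step *= 2
    return out
```

Applying the scanned map at position t to the initial state reproduces exactly what the step-by-step recurrence would compute, which is the correctness property the parallel formulation relies on.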
👥 Authors
Carlos E. Luis (Bosch Corporate Research; Intelligent Autonomous Systems Group, Technical University Darmstadt)
A. Bottero (Bosch Corporate Research; Intelligent Autonomous Systems Group, Technical University Darmstadt)
Julia Vinogradska (Bosch Corporate Research)
Felix Berkenkamp (Aleph Alpha Research)
Jan Peters (Intelligent Autonomous Systems Group, Technical University Darmstadt; German Research Center for AI (DFKI); Hessian.AI; Centre for Cognitive Science)

🏷️ Topics: Generative AI, Reinforcement Learning, Robotics