Scalable Policy-Based RL Algorithms for POMDPs

πŸ“… 2025-10-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the computational intractability of policy learning in POMDPs arising from continuous belief states, this paper introduces the Superstate MDP framework: an approximation of the original POMDP by a finite MDP whose states are bounded-length observation-action histories. Methodologically, it integrates policy-based linear function approximation with temporal-difference (TD) learning and, for the first time, establishes a finite-time error bound for TD learning under non-Markovian dynamics. Theoretically, the authors prove that the approximation error decays exponentially with the history length, and that the optimal value function of the Superstate MDP approximates the true POMDP optimal value more tightly than that of the conventional belief-state MDP. Empirical evaluation demonstrates the method's efficiency and scalability in high-dimensional observation settings. The work thus offers a novel, theoretically grounded, and practically viable approach to partially observable reinforcement learning.
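The core construction described above, an MDP whose state is a bounded-length observation-action history, can be sketched as follows. This is an illustrative approximation of the idea, not code from the paper; the class name, history representation, and interface are assumptions.

```python
from collections import deque

class SuperstateTracker:
    """Maintains a fixed-length (action, observation) history as the MDP state.

    Hypothetical sketch of the "superstate" idea: instead of tracking a
    continuous belief over hidden states, the agent treats the tuple of its
    last `history_length` (action, observation) pairs as a discrete MDP state.
    """

    def __init__(self, history_length):
        self.history_length = history_length
        # deque with maxlen automatically evicts the oldest pair,
        # enforcing the bounded history length.
        self.buffer = deque(maxlen=history_length)

    def reset(self, initial_observation):
        self.buffer.clear()
        self.buffer.append((None, initial_observation))

    def update(self, action, observation):
        self.buffer.append((action, observation))

    def state(self):
        # The superstate: a hashable tuple usable as a finite MDP state
        # (e.g., as a key into a value table or a feature-map input).
        return tuple(self.buffer)

tracker = SuperstateTracker(history_length=3)
tracker.reset(initial_observation="o0")
tracker.update("a1", "o1")
tracker.update("a2", "o2")
tracker.update("a3", "o3")  # the initial (None, "o0") entry is evicted
print(tracker.state())  # (('a1', 'o1'), ('a2', 'o2'), ('a3', 'o3'))
```

Because the number of distinct bounded histories is finite, standard MDP machinery applies; the paper's exponential-decay result says longer histories make this approximation rapidly more faithful.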

πŸ“ Abstract
The continuous nature of belief states in POMDPs presents significant computational challenges in learning the optimal policy. In this paper, we consider an approach that solves a Partially Observable Reinforcement Learning (PORL) problem by approximating the corresponding POMDP model as a finite-state Markov Decision Process (MDP), called the Superstate MDP. We first derive theoretical guarantees, improving upon prior work, that relate the optimal value function of the transformed Superstate MDP to the optimal value function of the original POMDP. Next, we propose a policy-based learning approach with linear function approximation to learn the optimal policy for the Superstate MDP. Consequently, our approach shows that a POMDP can be approximately solved using TD-learning followed by policy optimization by treating it as an MDP, where the MDP state corresponds to a finite history. We show that the approximation error decreases exponentially with the length of this history. To the best of our knowledge, our finite-time bounds are the first to explicitly quantify the error introduced when applying standard TD learning to a setting where the true dynamics are not Markovian.
Problem

Research questions and friction points this paper is trying to address.

Solving POMDPs by approximating them as finite-state MDPs
Providing theoretical guarantees for optimal value function approximation
Quantifying error reduction in non-Markovian TD learning applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Approximates POMDP into finite-state MDP
Uses policy-based learning with linear approximation
Applies TD-learning and policy optimization techniques
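The TD-learning component listed above can be illustrated with a minimal TD(0) update under linear function approximation. This is a generic sketch of the standard technique the paper analyzes over superstate features, not the paper's own implementation; the feature vectors, step size `alpha`, and discount `gamma` are illustrative assumptions.

```python
import numpy as np

def td0_linear_update(w, phi_s, reward, phi_s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step with linear value approximation V(s) ~ w . phi(s).

    w         : current weight vector
    phi_s     : feature vector of the current (super)state
    reward    : observed one-step reward
    phi_s_next: feature vector of the next (super)state
    """
    # TD error: delta = r + gamma * V(s') - V(s)
    td_error = reward + gamma * np.dot(w, phi_s_next) - np.dot(w, phi_s)
    # Semi-gradient update: w <- w + alpha * delta * phi(s)
    return w + alpha * td_error * phi_s

w = np.zeros(2)
phi_s = np.array([1.0, 0.0])       # one-hot features for two superstates
phi_s_next = np.array([0.0, 1.0])
w = td0_linear_update(w, phi_s, reward=1.0, phi_s_next=phi_s_next)
print(w)  # [0.1 0. ]  (delta = 1 + 0.9*0 - 0 = 1, so w += 0.1 * phi_s)
```

When the superstate is a truncated history rather than the true hidden state, the dynamics seen by this update are only approximately Markovian; the paper's finite-time bounds quantify the extra error that introduces.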
πŸ”Ž Similar Papers
No similar papers found.