🤖 AI Summary
To address the computational intractability of policy learning in POMDPs arising from continuous belief states, this paper introduces the Superstate MDP framework: an approximation of the original POMDP by a finite MDP whose states are bounded-length observation-action histories. Methodologically, it combines policy-based linear function approximation with temporal-difference (TD) learning and, for the first time, establishes a finite-time error bound for TD learning under non-Markovian dynamics. Theoretically, the paper proves that the approximation error decays exponentially with the history length, and that the optimal value function of the Superstate MDP approximates the true POMDP optimal value more tightly than that of the conventional belief-state MDP. Empirical evaluation demonstrates the method's efficiency and scalability in high-dimensional observation settings. This work thus offers a novel, theoretically grounded, and practically viable approach to partially observable reinforcement learning.
📄 Abstract
The continuous nature of belief states in POMDPs presents significant computational challenges for learning the optimal policy. In this paper, we consider an approach that solves a Partially Observable Reinforcement Learning (PORL) problem by approximating the corresponding POMDP with a finite-state Markov Decision Process (MDP), called the Superstate MDP. We first derive theoretical guarantees, improving upon prior work, that relate the optimal value function of the Superstate MDP to that of the original POMDP. Next, we propose a policy-based learning approach with linear function approximation to learn the optimal policy for the Superstate MDP. Our approach thus shows that a POMDP can be approximately solved via TD learning followed by policy optimization, treating it as an MDP whose states are finite histories. We show that the approximation error decreases exponentially with the length of this history. To the best of our knowledge, our finite-time bounds are the first to explicitly quantify the error introduced when standard TD learning is applied in a setting where the true dynamics are not Markovian.
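To make the superstate construction concrete, here is a minimal, self-contained sketch (not the paper's implementation) of the core idea: run TD(0) with linear function approximation where the "state" is the last K (action, observation) pairs. The toy two-state POMDP, the fixed behavior policy, and all constants (K, GAMMA, ALPHA) are hypothetical choices for illustration; one-hot features over superstates make linear TD coincide with tabular TD here.

```python
import itertools
import random

random.seed(0)

# Hypothetical toy 2-state POMDP: the hidden state persists with prob 0.9,
# the observation matches the hidden state with prob 0.8, and the reward
# is 1 when the action matches the hidden state.
ACTIONS, OBS = [0, 1], [0, 1]
K = 3          # superstate = last K (action, observation) pairs
GAMMA = 0.9    # discount factor
ALPHA = 0.1    # TD step size

def step(s, a):
    r = 1.0 if a == s else 0.0
    s2 = s if random.random() < 0.9 else 1 - s
    o = s2 if random.random() < 0.8 else 1 - s2
    return s2, o, r

# Enumerate all length-K histories; each one is a state of the finite
# Superstate MDP that approximates the POMDP.
superstates = list(itertools.product(itertools.product(ACTIONS, OBS), repeat=K))
index = {h: i for i, h in enumerate(superstates)}

# Linear value function with one-hot features over superstates.
w = [0.0] * len(superstates)

def policy(h):
    # Fixed behavior policy for evaluation: repeat the latest observation.
    return h[-1][1]

s, h = 0, tuple(((0, 0),) * K)
for _ in range(50_000):
    a = policy(h)
    s, o, r = step(s, a)
    h2 = h[1:] + ((a, o),)
    # TD(0) update on the superstate chain. The chain is only
    # approximately Markovian; the paper's finite-time bound
    # quantifies the error this introduces.
    i, j = index[h], index[h2]
    w[i] += ALPHA * (r + GAMMA * w[j] - w[i])
    h = h2
```

Because rewards lie in [0, 1], every learned value stays below 1/(1 - GAMMA); increasing K shrinks the history-truncation error at the cost of an exponentially larger superstate set (here 4^K states).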