Toward Learning POMDPs Beyond Full-Rank Actions and State Observability

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a method for learning the structure and parameters of discrete partially observable Markov decision processes (POMDPs) from action-observation sequences under weak assumptions: the system state is unobservable, and some action-conditioned transition matrices may be rank-deficient. By integrating predictive state representations (PSRs) with tensor decomposition, the approach jointly estimates the observation and transition matrices up to a similarity transformation and partitions states into equivalence classes that share identical observation distributions, thereby constructing an explicit likelihood model. In contrast to conventional tensor-based methods, which require full-rank action-conditioned transition matrices and observable states, this is the first approach to enable explicit modeling of both transition and observation likelihoods in partially observable settings. Given sufficient data, the learned model can be used directly by standard POMDP solvers, achieving planning performance comparable to PSR-based methods while supporting behavior customization through explicit likelihoods.

📝 Abstract
We are interested in enabling autonomous agents to learn and reason about systems with hidden states, such as locking mechanisms. We cast this problem as learning the parameters of a discrete Partially Observable Markov Decision Process (POMDP). The agent begins with knowledge of the POMDP's action and observation spaces, but not its state space, transition model, or observation model. These must be constructed from a sequence of actions and observations. Spectral approaches to learning models of partially observable domains, such as Predictive State Representations (PSRs), learn representations of state that are sufficient to predict future outcomes. PSR models, however, lack explicit transition and observation models that can be combined with different reward functions to solve different planning problems. Under a mild set of rank assumptions on the products of transition and observation matrices, we show how PSRs learn POMDP matrices up to a similarity transform, and this transform may be estimated via tensor decomposition methods. Our method learns observation and transition matrices up to a partition of states, where states in the same partition share identical observation distributions under actions whose transition matrices are full-rank. Our experiments suggest that explicit observation and transition likelihoods can be leveraged to generate new plans for different goals and reward functions after the model has been learned. We also show that learning a POMDP beyond a partition of states is impossible from sequential data, by constructing two POMDPs that agree on all observation distributions but differ in their transition dynamics.
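The abstract's claim that sequential data identifies POMDP matrices only "up to a similarity transform" can be illustrated with a small numerical sketch. The toy single-action model below (random transition matrix `T`, observation matrix `O`, and uniform initial belief `pi`) is invented for illustration and is not from the paper: any invertible `S` yields a transformed parameterization whose observable operators `S @ B_o @ inv(S)` assign identical probabilities to every observation sequence, so no amount of sequence data can distinguish the two.

```python
import numpy as np

# Illustrative toy model (hypothetical, single action for simplicity).
rng = np.random.default_rng(0)
n_states, n_obs = 3, 4

T = rng.random((n_states, n_states))
T /= T.sum(axis=0)                      # column-stochastic transition matrix
O = rng.random((n_obs, n_states))
O /= O.sum(axis=0)                      # O[o, s] = P(obs = o | state = s)
pi = np.ones(n_states) / n_states       # uniform initial belief

def seq_prob(obs_seq, T, O, b0):
    """P(o_1..o_n) via observable operators B_o = T @ diag(O[o, :])."""
    b = b0.copy()
    for o in obs_seq:
        b = T @ np.diag(O[o]) @ b
    return b.sum()

def seq_prob_transformed(obs_seq, T, O, b0, S):
    """Same probability computed with the similarity-transformed system:
    operators S B_o S^-1, initial vector S b0, final functional 1^T S^-1."""
    S_inv = np.linalg.inv(S)
    b = S @ b0
    ones = np.ones(n_states) @ S_inv
    for o in obs_seq:
        b = S @ T @ np.diag(O[o]) @ S_inv @ b
    return ones @ b

S = rng.random((n_states, n_states)) + np.eye(n_states)  # arbitrary invertible transform
seq = [0, 2, 1, 3]
p1 = seq_prob(seq, T, O, pi)
p2 = seq_prob_transformed(seq, T, O, pi, S)
print(p1, p2)  # the two parameterizations agree on every sequence probability
```

This is exactly the ambiguity the paper resolves via tensor decomposition: the decomposition pins down the transform, recovering the observation and transition matrices up to a partition of observationally indistinguishable states.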
Problem

Research questions and friction points this paper is trying to address.

POMDP
hidden states
partial observability
transition matrices
observation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

POMDP learning
Predictive State Representations
tensor decomposition
partial observability
state partitioning