Multi-Environment POMDPs: Discrete Model Uncertainty Under Partial Observability

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses robust decision-making in partially observable Markov decision processes (POMDPs) under discrete model uncertainty: specifically, finding a single policy that performs well across multiple POMDP models that share identical state, action, and observation spaces but differ in their transition, observation, or reward functions. To this end, the authors propose the adversarial-belief POMDP framework, which reformulates multi-model robust optimization as a single POMDP in which an adversary selects the worst-case model to minimize expected reward. They extend standard POMDP solution techniques—integrating exact dynamic programming with point-based approximation methods—to support an adversarially chosen initial belief over models. Empirical evaluation on canonical POMDP benchmarks extended to the multi-environment setting demonstrates the method's effectiveness, robustness, and scalability. The approach provides both theoretical foundations and practical tools for real-world scenarios involving conflicting expert models or epistemic uncertainty.
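The worst-case objective described in the summary can be illustrated with a small sketch: evaluate one fixed policy on each candidate POMDP by exhaustive belief-space enumeration and take the minimum over models. This is a didactic illustration of the robust objective, not the paper's algorithm; all names and array layouts (`T[a][s, s']`, `O[a][s', o]`, `R[a][s]`) are assumptions for the example.

```python
import numpy as np

def worst_case_value(policy, models, horizon, gamma=0.95):
    """Robust value of a fixed policy: min over models of its expected
    discounted return. Each model is a dict with per-action transition
    matrices T[a][s, s'], observation matrices O[a][s', o], reward
    vectors R[a][s], and an initial belief b0. `policy` maps an
    observation history (a tuple) to an action index. Enumeration is
    exponential in the horizon, so this only works for tiny problems."""
    def value(model, hist, b, t):
        if t == horizon:
            return 0.0
        a = policy(hist)
        T, O, R = model["T"][a], model["O"][a], model["R"][a]
        total = float(b @ R)                    # expected immediate reward
        b_next = b @ T                          # predicted next-state distribution
        for o in range(O.shape[1]):
            p_o = float(b_next @ O[:, o])       # probability of observing o
            if p_o > 1e-12:
                b_o = (b_next * O[:, o]) / p_o  # Bayesian belief update
                total += gamma * p_o * value(model, hist + (o,), b_o, t + 1)
        return total

    return min(value(m, (), m["b0"], 0) for m in models)
```

Because the minimum is taken outside the expectation, a policy that is optimal for every individual model can still be dominated by one that hedges across models; that gap is exactly what the adversarial formulation targets.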

📝 Abstract
Multi-environment POMDPs (ME-POMDPs) extend standard POMDPs with discrete model uncertainty. ME-POMDPs represent a finite set of POMDPs that share the same state, action, and observation spaces, but may arbitrarily vary in their transition, observation, and reward models. Such models arise, for instance, when multiple domain experts disagree on how to model a problem. The goal is to find a single policy that is robust against any choice of POMDP within the set, i.e., a policy that maximizes the worst-case reward across all POMDPs. We generalize and expand on existing work in the following way. First, we show that ME-POMDPs can be generalized to POMDPs with sets of initial beliefs, which we call adversarial-belief POMDPs (AB-POMDPs). Second, we show that any arbitrary ME-POMDP can be reduced to a ME-POMDP that only varies in its transition and reward functions or only in its observation and reward functions, while preserving (optimal) policies. We then devise exact and approximate (point-based) algorithms to compute robust policies for AB-POMDPs, and thus ME-POMDPs. We demonstrate that we can compute policies for standard POMDP benchmarks extended to the multi-environment setting.
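In standard robust-POMDP notation (assumed here, not quoted from the paper; discounting with factor $\gamma$ is one common choice of objective), the goal stated in the abstract for a model set $\{M_1, \dots, M_n\}$ sharing state, action, and observation spaces is the max-min value

```latex
\max_{\pi} \; \min_{1 \le i \le n} \; \mathbb{E}^{M_i}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, R_i(s_t, a_t) \right],
```

where $R_i$ is the reward function of model $M_i$ and the expectation is taken over trajectories induced by $\pi$ in $M_i$. The inner minimum is the adversary's model choice, which the AB-POMDP view recasts as an adversarial choice of initial belief.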
Problem

Research questions and friction points this paper is trying to address.

Extending POMDPs with discrete model uncertainty
Finding robust policies across varying transition and reward models
Generalizing multi-environment POMDPs to adversarial-belief settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends POMDPs with discrete model uncertainty
Reduces arbitrary ME-POMDPs, policy-preservingly, to ones varying only in transition and reward (or only observation and reward) functions
Devises exact and approximate point-based algorithms
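The reduction behind these contributions can be pictured as folding the model set into one product POMDP: the unobserved model index becomes part of the state and never changes, so model uncertainty turns into ordinary state uncertainty, and the adversary's model choice becomes a choice of initial belief over the index component. The construction below is a hypothetical sketch under assumed array layouts (`T[a]` of shape `(S, S)`, `O[a]` of shape `(S, Obs)`, `R[a]` of shape `(S,)`), not the paper's code.

```python
import numpy as np

def product_pomdp(models):
    """Fold n POMDP models over S states into a single POMDP over n*S
    states. State i*S + s means 'true model is i, underlying state is s';
    transitions are block-diagonal because the model index is absorbing."""
    n = len(models)
    A = len(models[0]["T"])
    S = models[0]["T"][0].shape[0]
    nobs = models[0]["O"][0].shape[1]
    T = [np.zeros((n * S, n * S)) for _ in range(A)]
    O = [np.zeros((n * S, nobs)) for _ in range(A)]
    R = [np.zeros(n * S) for _ in range(A)]
    for i, m in enumerate(models):
        lo = i * S
        for a in range(A):
            T[a][lo:lo + S, lo:lo + S] = m["T"][a]  # stay inside model i's block
            O[a][lo:lo + S, :] = m["O"][a]
            R[a][lo:lo + S] = m["R"][a]
    return {"T": T, "O": O, "R": R}
```

Any off-the-shelf POMDP solver can then, in principle, be run on the product model; the AB-POMDP twist is that the initial belief over the index blocks is picked adversarially rather than fixed.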