🤖 AI Summary
This work addresses policy synthesis for multiple-environment Markov decision processes (MEMDPs), a partially observable setting in which the state is fully observable but the environment is hidden, with a focus on parity objectives. It establishes, for the first time, a tight relation between the universal (adversarial) and prior (stochastic) environment-selection semantics under parity objectives: the universal value equals the infimum of prior values over all possible beliefs. Leveraging belief-state dynamic programming, parity-game analysis, and the fact that belief entropy never increases, the paper proposes a space-efficient algorithm that approximates the prior-MEMDP value to any precision. This reduces the complexity of the gap problem for the universal semantics from double-exponential space to PSPACE when probabilities are encoded in unary (EXPSPACE otherwise), and identifies prior-MEMDPs as a tractable subclass of POMDPs, characterized precisely by non-increasing belief entropy.
📝 Abstract
Multiple-environment Markov decision processes (MEMDPs) equip an MDP with several probabilistic transition functions (one per possible environment), so that the state is observable but the environment is not. Previous work studies two semantics: (i) the universal semantics, where an adversary picks the environment; and (ii) the prior semantics, where the environment is drawn once, before execution, from a fixed distribution. We clarify the relation between these semantics. For parity objectives, we show that the qualitative (value-one) questions coincide, and we develop a new algorithm for the general value of MEMDPs under the prior semantics. In particular, we show that the prior value of an MEMDP with a parity objective can be approximated to any precision by a space-efficient algorithm; equivalently, the associated gap problem is decidable in PSPACE when probabilities are given in unary (and in EXPSPACE otherwise). We then prove that the universal value equals the infimum of prior values over all beliefs. This yields a new algorithm for the universal gap problem with the same complexity (PSPACE for unary probabilities, EXPSPACE in general), improving on earlier doubly-exponential-space procedures. Finally, we observe that MEMDPs under the prior semantics form an important tractable subclass of POMDPs: our algorithms exploit the fact that belief entropy never increases, and we establish that any POMDP with this property reduces effectively to a prior-MEMDP, showing that prior-MEMDPs capture a broad and practically relevant subclass of POMDPs.
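To make the belief dynamics concrete, here is a minimal sketch (all names and the two-environment transition table are hypothetical, not taken from the paper) of the Bayesian belief update over environments in a prior-MEMDP: after observing a transition `s → s'` under action `a`, each environment's probability is reweighted by its likelihood of that transition. The sketch also checks the standard fact that conditioning cannot increase entropy in expectation, which is the general form of the entropy behavior the paper's algorithms exploit:

```python
import math

def entropy(belief):
    """Shannon entropy (in bits) of a belief vector over environments."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def belief_update(belief, likelihoods):
    """Bayes update: likelihoods[i] = P_i(s' | s, a) in environment i."""
    joint = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(joint)
    if z == 0:
        raise ValueError("observed transition impossible under current belief")
    return [p / z for p in joint]

# Hypothetical 2-environment MEMDP: from state s under action a, the
# transition probabilities to the two successors differ per environment.
trans = [
    [0.8, 0.2],  # environment 0: P_0(succ 0), P_0(succ 1)
    [0.3, 0.7],  # environment 1: P_1(succ 0), P_1(succ 1)
]

belief = [0.5, 0.5]  # uniform prior over the two environments

# Probability of each successor under the current belief.
obs_prob = [sum(b * trans[i][o] for i, b in enumerate(belief))
            for o in range(2)]

# Expected posterior entropy after one observation never exceeds the
# current entropy (conditioning reduces entropy in expectation).
expected_H = sum(
    obs_prob[o] * entropy(belief_update(belief, [trans[i][o] for i in range(2)]))
    for o in range(2)
)
assert expected_H <= entropy(belief) + 1e-12
```

Note that the support of the belief can only shrink: an environment ruled out by an observation (likelihood zero) is excluded forever, which is one intuition behind treating prior-MEMDPs as a tractable subclass of POMDPs.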