🤖 AI Summary
This work addresses policy evaluation and optimization in Markov decision processes (MDPs) under model uncertainty, specifically epistemic uncertainty, by modeling the transition probabilities as random variables and applying law-invariant risk measures to capture ambiguity-averse preferences. The authors develop a risk-sensitive MDP framework that extends the value function and Bellman operator while preserving compatibility with dynamic programming. Key contributions include unifying existing models of MDPs under epistemic uncertainty, rigorously characterizing the class of risk measures that retain the dynamic programming structure, establishing the existence of optimal stationary policies, and proposing tailored value and policy iteration algorithms. The analysis delineates precisely when dynamic programming remains applicable in uncertain environments.
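As a hedged illustration of the extended Bellman operator (a sketch, not necessarily the paper's exact formulation), one natural form applies the risk measure $\rho$ to the randomness of the kernel $P$ itself:

$$
(T V)(s) \;=\; \max_{a}\; \rho\!\left( r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V(s') \right),
$$

where the argument of $\rho$ is random through $P$. Choosing $\rho$ to be the expectation recovers the classical Bellman operator, while the worst case over models (essential infimum of the return) recovers robust MDPs, which is one sense in which such a framework unifies existing models.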
📝 Abstract
In this paper, we propose a general theory of ambiguity-averse MDPs, which treats the uncertain transition probabilities as random variables and evaluates a policy via a risk measure applied to its random return. This ambiguity-averse MDP framework unifies several models of MDPs with epistemic uncertainty for specific choices of risk measure. We extend the notions of value function and Bellman operator to our setting. Building on these objects, we establish the consequences of the dynamic programming principle in this framework (existence of optimal stationary policies, value and policy iteration algorithms), and we completely characterize the law-invariant risk measures compatible with dynamic programming. Our work draws connections among several variants of MDP models and fully delineates what is possible under the dynamic programming paradigm and which risk measures require leaving it.
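To make the algorithmic side concrete, here is a minimal sketch of risk-sensitive value iteration, not the paper's algorithm. It makes two simplifying assumptions: epistemic uncertainty is approximated by a finite set of `K` equally likely sampled transition models, and the law-invariant risk measure is taken to be CVaR. The helper names (`cvar`, `risk_sensitive_value_iteration`) are illustrative.

```python
import numpy as np

def cvar(values, weights, alpha):
    """Lower-tail conditional value-at-risk of a discrete distribution.

    Averages the worst `alpha` fraction of `values`, where `values[k]`
    occurs with probability `weights[k]`. Illustrative helper; the paper
    allows general law-invariant risk measures.
    """
    order = np.argsort(values)                 # worst outcomes first
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    prev = np.concatenate(([0.0], cum[:-1]))
    # Probability mass each outcome contributes to the alpha-tail.
    tail = np.minimum(cum, alpha) - np.minimum(prev, alpha)
    return float(tail @ v) / alpha

def risk_sensitive_value_iteration(P, r, gamma=0.95, alpha=0.1, tol=1e-8):
    """Value iteration with a risk measure over sampled transition models.

    P : (K, S, A, S) array -- K equally likely sampled kernels
                              (finite approximation of epistemic uncertainty).
    r : (S, A) array       -- rewards.
    alpha = 1 recovers the risk-neutral (expected-value) Bellman update;
    alpha -> 0 approaches the worst-case (robust MDP) update.
    """
    K, S, A, _ = P.shape
    w = np.full(K, 1.0 / K)
    V = np.zeros(S)
    while True:
        # Q[k, s, a]: one-step lookahead value under sampled model k.
        Q = r[None, :, :] + gamma * np.einsum("ksat,t->ksa", P, V)
        # Apply the risk measure across the K models for each (s, a).
        Q_risk = np.array([[cvar(Q[:, s, a], w, alpha) for a in range(A)]
                           for s in range(S)])
        V_new = Q_risk.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q_risk.argmax(axis=1)  # value and greedy policy
        V = V_new
```

A companion policy iteration would alternate greedy policy extraction with risk-sensitive policy evaluation; roughly, the paper's characterization result identifies which law-invariant risk measures make such fixed-point iterations sound.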