🤖 AI Summary
This paper addresses risk-averse policy optimization in infinite-horizon, undiscounted total-reward Markov decision processes (MDPs), targeting a key limitation of existing model-based algorithms for the entropic risk measure (ERM) and entropic value-at-risk (EVaR): they require full knowledge of the transition dynamics and scale poorly to large problems. We propose the first model-free, dynamically consistent, and computationally tractable Q-learning algorithm for total-reward ERM and EVaR optimization. Leveraging stochastic approximation theory, we establish guarantees on convergence and optimality. Crucially, the algorithm requires no knowledge of the transition probabilities, only sample trajectories, and converges quickly and reliably to the optimal risk-averse value function in tabular settings. Empirical results show a marked reduction in the probability of selecting high-risk policies.
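As a rough illustration (not taken from the paper), one way a model-free ERM update can work is in exponential-utility space, where the ERM Bellman target becomes a plain expectation that stochastic approximation can estimate from sampled transitions. The environment interface (`env.n_states`, `env.reset()`, `env.step()`), step-size schedule, and parameter values below are hypothetical; the paper's actual update rule and convergence conditions are not reproduced here.

```python
import numpy as np

# Hypothetical sketch: tabular Q-learning for a total-reward ERM objective,
# run in exponential-utility space so the update is a plain stochastic
# approximation of the ERM Bellman equation
#   Q(s, a) = -(1/beta) * log E_{s'}[ exp(-beta * (r + max_a' Q(s', a'))) ].
# All names, step sizes, and the environment interface are illustrative.

def erm_q_learning(env, beta=0.5, episodes=5000, alpha0=0.5, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_s, n_a = env.n_states, env.n_actions        # assumed tabular env interface
    U = np.ones((n_s, n_a))                       # U = exp(-beta * Q), so Q starts at 0
    visits = np.zeros((n_s, n_a))

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy on Q = -(1/beta) * log(U); minimizing U maximizes Q
            a = rng.integers(n_a) if rng.random() < eps else int(np.argmin(U[s]))
            s_next, r, done = env.step(a)         # assumed to return (state, reward, done)

            # greedy next value in Q-space; terminal states contribute no future reward
            v_next = 0.0 if done else -np.log(np.min(U[s_next])) / beta

            # Robbins-Monro step toward the exponentiated target exp(-beta * (r + v_next))
            visits[s, a] += 1
            lr = alpha0 / visits[s, a]
            U[s, a] += lr * (np.exp(-beta * (r + v_next)) - U[s, a])
            s = s_next

    return -np.log(U) / beta                      # Q-values under ERM with parameter beta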
📝 Abstract
Risk-averse total-reward Markov Decision Processes (MDPs) offer a promising framework for modeling and solving undiscounted infinite-horizon objectives. Existing model-based algorithms for risk measures like the entropic risk measure (ERM) and entropic value-at-risk (EVaR) are effective in small problems, but require full access to transition probabilities. We propose a Q-learning algorithm to compute the optimal stationary policy for total-reward ERM and EVaR objectives with strong convergence and performance guarantees. The algorithm and its optimality are made possible by ERM's dynamic consistency and elicitability. Our numerical results on tabular domains demonstrate quick and reliable convergence of the proposed Q-learning algorithm to the optimal risk-averse value function.
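For context, EVaR at risk level alpha is commonly written as a supremum of ERM values over the risk parameter beta, EVaR_alpha[X] = sup_{beta > 0} { ERM_beta[X] + log(alpha) / beta }. The snippet below is a naive illustrative reduction, not the paper's algorithm: it reuses the hypothetical ERM sketch above over a grid of beta values and keeps the best adjusted value at a given start state.

```python
import numpy as np

# Hypothetical sketch: approximate the EVaR objective by sweeping the ERM risk
# parameter beta and applying the log(alpha)/beta correction. Grid bounds,
# alpha, and the start state s0 are illustrative choices.

def evar_from_erm(env, s0, alpha=0.2, betas=np.geomspace(1e-2, 10.0, 20)):
    best_value, best_beta = -np.inf, None
    for beta in betas:
        Q = erm_q_learning(env, beta=beta)          # ERM sketch defined earlier
        value = Q[s0].max() + np.log(alpha) / beta  # ERM value plus EVaR correction
        if value > best_value:
            best_value, best_beta = value, beta
    return best_value, best_beta
```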