Risk-Averse Total-Reward Reinforcement Learning

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses risk-averse policy optimization in infinite-horizon, undiscounted total-reward Markov decision processes (MDPs), overcoming a key limitation of existing model-based algorithms for the entropic risk measure (ERM) and entropic value-at-risk (EVaR): their reliance on full transition dynamics, which prevents them from scaling to large problems. The authors propose the first model-free, dynamically consistent, and tractable Q-learning algorithm for total-reward ERM/EVaR optimization. Leveraging stochastic approximation theory, they establish guarantees on convergence and optimality. Crucially, the algorithm requires no knowledge of transition probabilities, only sampled trajectories, and converges quickly and stably to the optimal risk-averse value function in tabular settings. Empirical results demonstrate a significant reduction in the probability of selecting high-risk policies.

📝 Abstract
Risk-averse total-reward Markov Decision Processes (MDPs) offer a promising framework for modeling and solving undiscounted infinite-horizon objectives. Existing model-based algorithms for risk measures like the entropic risk measure (ERM) and entropic value-at-risk (EVaR) are effective in small problems, but require full access to transition probabilities. We propose a Q-learning algorithm to compute the optimal stationary policy for total-reward ERM and EVaR objectives with strong convergence and performance guarantees. The algorithm and its optimality are made possible by ERM's dynamic consistency and elicitability. Our numerical results on tabular domains demonstrate quick and reliable convergence of the proposed Q-learning algorithm to the optimal risk-averse value function.
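For concreteness, the two risk measures named in the abstract can be computed directly on a small discrete reward distribution. The distribution and the grid search below are illustrative choices, not from the paper; the formulas are the standard risk-averse ERM of a reward and the well-known representation of EVaR as a supremum of ERM values.

```python
import numpy as np

# Illustrative two-outcome reward: +10 or -6, each with probability 0.5 (mean +2).
xs = np.array([10.0, -6.0])
ps = np.array([0.5, 0.5])

def erm(beta):
    """Risk-averse entropic risk measure of a reward X: -(1/beta) log E[exp(-beta*X)].

    Computed in log-sum-exp form to stay numerically stable for large beta.
    """
    z = -beta * xs
    m = z.max()
    return -(m + np.log(ps @ np.exp(z - m))) / beta

def evar(alpha, betas=np.logspace(-3, 3, 4000)):
    """EVaR via its ERM representation: sup over beta > 0 of ERM_beta(X) + log(alpha)/beta.

    A finite grid over beta gives a lower bound on the supremum.
    """
    return max(erm(b) + np.log(alpha) / b for b in betas)

print(round(erm(1e-6), 2))  # ≈ 2.0: as beta -> 0, ERM recovers the mean reward
print(round(erm(0.5), 2))   # ≈ -4.61: risk aversion heavily penalizes the -6 outcome
print(round(evar(0.1), 2))  # ≈ -6.0: at alpha = 0.1, EVaR is near the worst case
```

Note how ERM interpolates between the risk-neutral mean (beta near 0) and the worst-case outcome (large beta), while EVaR's level alpha controls how close to the worst case the evaluation sits.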
Problem

Research questions and friction points this paper is trying to address.

Develop Q-learning for risk-averse total-reward MDPs
Overcome transition probability dependency in existing methods
Ensure convergence for ERM and EVaR objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Q-learning for risk-averse total-reward MDPs
Dynamic consistency enables optimal policies
Convergence guarantees for ERM and EVaR
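The model-free idea behind the bullets above can be sketched as a tabular exponential-utility Q-learning update: track U = E[exp(-beta * return)] from samples and recover the ERM value as Q = -log(U)/beta. Everything below (the toy two-armed episodic problem, variable names, learning rate) is an illustrative assumption, a simplified sketch rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-armed episodic problem (NOT from the paper):
#   action 0 ("safe"):  reward +1 deterministically, then terminate
#   action 1 ("risky"): reward +10 or -6 w.p. 0.5 each (mean +2), then terminate
def step(action):
    if action == 0:
        return 1.0
    return 10.0 if rng.random() < 0.5 else -6.0

beta = 0.5      # risk-aversion level; ERM_beta(X) = -(1/beta) log E[exp(-beta*X)]
alpha = 0.02    # learning rate
U = np.ones(2)  # U[a] estimates E[exp(-beta * return)]; ERM value is Q = -log(U)/beta

for _ in range(50_000):
    a = rng.integers(2)          # uniform exploration over the two actions
    r = step(a)
    target = np.exp(-beta * r)   # terminal transition, so no bootstrap factor;
                                 # a non-terminal step would instead use
                                 # exp(-beta * r) * min over a' of U[s_next, a']
    U[a] += alpha * (target - U[a])

Q = -np.log(U) / beta
print(int(np.argmax(Q)))  # → 0: the safe arm wins despite its lower mean reward
```

Maximizing Q is equivalent to minimizing U, which is why the greedy step in a multi-state version would take a min over next-state U values; the update itself needs only sampled rewards, never the transition probabilities.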
Xihong Su
Computer Science, University of New Hampshire
Reinforcement Learning
Jia Lin Hau
Department of Computer Science, University of New Hampshire, Durham, NH 03824
Gersi Doko
Department of Computer Science, University of New Hampshire, Durham, NH 03824
Kishan Panaganti
Tencent AI Lab
Large Reasoning Models, Reinforcement Learning, Robust Optimization, Statistical Learning
Marek Petrik
University of New Hampshire
Machine Learning