A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents

📅 2024-03-10
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies risk-sensitive reinforcement learning (RL), targeting risk measures of cumulative reward in the optimized certainty equivalent (OCE) class, which includes conditional value-at-risk (CVaR), entropic risk, and Markowitz's mean-variance. Methodologically, it introduces a general reduction framework that transforms OCE risk optimization into a risk-neutral RL task on an augmented Markov decision process (MDP). Theoretically, this reduction yields novel OCE bounds in complex, rich-observation MDPs, generalizing prior CVaR RL results and giving the first risk-sensitive bounds for exogenous block MDPs. Algorithmically, the paper proposes two meta-algorithms, one based on optimism and one on policy gradients; the gradient-based method enjoys monotone improvement and global convergence guarantees under a discrete reward assumption. Empirically, the algorithms learn the optimal history-dependent policy in a proof-of-concept MDP where all Markovian policies provably fail.

📝 Abstract
We study risk-sensitive RL where the goal is to learn a history-dependent policy that optimizes some risk measure of cumulative rewards. We consider a family of risks called the optimized certainty equivalents (OCE), which captures important risk measures such as conditional value-at-risk (CVaR), entropic risk and Markowitz's mean-variance. In this setting, we propose two meta-algorithms: one grounded in optimism and another based on policy gradients, both of which can leverage the broad suite of risk-neutral RL algorithms in an augmented Markov Decision Process (MDP). Via a reductions approach, we leverage theory for risk-neutral RL to establish novel OCE bounds in complex, rich-observation MDPs. For the optimism-based algorithm, we prove bounds that generalize prior results in CVaR RL and that provide the first risk-sensitive bounds for exogenous block MDPs. For the gradient-based algorithm, we establish both monotone improvement and global convergence guarantees under a discrete reward assumption. Finally, we empirically show that our algorithms learn the optimal history-dependent policy in a proof-of-concept MDP, where all Markovian policies provably fail.
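The OCE family unifies these risk measures through a single variational formula, OCE_u(X) = sup_λ { λ + E[u(X − λ)] }, where u is a concave utility. As an illustrative sketch (not code from the paper), the snippet below estimates an OCE from samples by maximizing over a grid of λ values; with the CVaR utility u(t) = −(1/α)·max(−t, 0) this recovers the Rockafellar–Uryasev characterization, i.e. the mean of the lowest α-fraction of outcomes. The function name `oce` and the grid search are assumptions made here for illustration.

```python
import numpy as np

def oce(samples, u):
    """Empirical optimized certainty equivalent:
    OCE_u(X) = sup_lambda { lambda + E[u(X - lambda)] },
    approximated by maximizing over a grid of lambda values
    spanning the sample range (illustrative sketch only)."""
    lambdas = np.linspace(samples.min(), samples.max(), 1001)
    return max(lam + np.mean(u(samples - lam)) for lam in lambdas)

# CVaR at level alpha corresponds to the utility u(t) = -max(-t, 0) / alpha
alpha = 0.1
u_cvar = lambda t: -np.maximum(-t, 0.0) / alpha

rng = np.random.default_rng(0)
x = rng.normal(size=50_000)

# Sanity check against the direct estimator: mean of the lowest alpha-fraction.
oce_val = oce(x, u_cvar)
direct = np.sort(x)[: int(alpha * len(x))].mean()
```

Choosing u(t) = (1 − e^{−γt})/γ instead yields the entropic risk, and u(t) = t − c·t² the mean-variance criterion, which is what makes a single reduction cover all three.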
Problem

Research questions and friction points this paper is trying to address.

Optimize OCE (optimized certainty equivalent) risk measures of cumulative rewards in RL.
Develop meta-algorithms for risk-sensitive RL in augmented MDPs.
Establish novel OCE bounds in complex, rich-observation MDPs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimized Certainty Equivalents for risk-sensitive RL
Meta-algorithms: optimism and policy gradients
Reductions approach leveraging risk-neutral RL theory
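The reduction behind these bullets can be made concrete with a small sketch. Assuming the standard augmented-state construction (the class names `OCEAugmentedEnv` and `CoinFlipEnv` and the reset/step interface are illustrative, not the paper's code): the state is augmented with the reward accumulated so far, intermediate rewards are zeroed out, and the utility u(return − λ) is paid once at termination, so any risk-neutral learner on the wrapped environment maximizes λ + E[u(G − λ)] for a fixed λ; an outer loop over λ then handles the sup in the OCE definition.

```python
import numpy as np

class OCEAugmentedEnv:
    """Illustrative sketch of the augmented-MDP reduction (not the
    paper's implementation): wrap an episodic env so that maximizing
    expected reward on the wrapped env maximizes E[u(return - lam)]
    for a fixed lam. The augmented state carries the accumulated
    reward, and u is applied once, at the end of the episode."""

    def __init__(self, env, u, lam):
        self.env, self.u, self.lam = env, u, lam

    def reset(self):
        self.cum = 0.0
        return (self.env.reset(), self.cum)  # (obs, accumulated reward)

    def step(self, action):
        obs, r, done = self.env.step(action)
        self.cum += r
        # risk-neutral reward is zero until termination, then u(G - lam)
        aug_r = self.u(self.cum - self.lam) if done else 0.0
        return (obs, self.cum), aug_r, done

class CoinFlipEnv:
    """Toy one-step env: reward is +1 or 0 with equal probability."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
    def reset(self):
        return 0  # single dummy observation
    def step(self, action):
        return 0, float(self.rng.integers(2)), True

# For alpha = 0.5 and lam = 0.5, lam + E[u(G - lam)] should match
# CVaR_{0.5} of a fair coin flip, i.e. 0 (the mean of the lower half).
alpha, lam = 0.5, 0.5
u = lambda t: -np.maximum(-t, 0.0) / alpha  # CVaR_alpha utility
env = OCEAugmentedEnv(CoinFlipEnv(seed=1), u, lam)

returns = []
for _ in range(20_000):
    env.reset()
    _, r, done = env.step(0)  # one-step episodes
    returns.append(r)
objective = lam + np.mean(returns)  # estimate of lam + E[u(G - lam)]
```

Because the augmented state exposes the accumulated reward, policies on the wrapped environment are history-dependent in the original one, which is exactly the class shown to dominate Markovian policies in the paper's proof-of-concept MDP.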