🤖 AI Summary
To address suboptimal policy convergence caused by overly conservative exploration in risk-averse constrained reinforcement learning (RaCRL), this paper proposes the Optimistic Risk-Averse Actor-Critic (ORAC) framework. ORAC enables uncertainty-driven optimistic exploration by jointly constructing upper confidence bounds (UCBs) for the reward value function and lower confidence bounds (LCBs) for the cost value function. It further introduces a dynamic cost-weighting mechanism that steers exploration toward high-reward regions while keeping the policy within the safety constraints. Experiments on challenging benchmarks—including Safety-Gymnasium and CityLearn—demonstrate that ORAC significantly improves the reward–cost trade-off, avoids convergence to sub-optimal policies, and maintains constraint satisfaction. The core contribution is the systematic integration of optimism-in-the-face-of-uncertainty into the risk-averse optimization objective, jointly improving exploration efficiency and policy performance under safety constraints.
📝 Abstract
Risk-averse Constrained Reinforcement Learning (RaCRL) aims to learn policies that minimise the likelihood of rare and catastrophic constraint violations caused by an environment's inherent randomness. In general, risk-aversion leads to conservative exploration of the environment, which typically results in convergence to sub-optimal policies that fail to adequately maximise reward or, in some cases, fail to achieve the goal. In this paper, we propose an exploration-based approach for RaCRL called Optimistic Risk-averse Actor Critic (ORAC), which constructs an exploratory policy by maximising a local upper confidence bound of the state-action reward value function whilst minimising a local lower confidence bound of the risk-averse state-action cost value function. Specifically, at each step, the weighting assigned to the cost value is increased if it exceeds the safety constraint value and decreased if it falls below it. This way the policy is encouraged to explore uncertain regions of the environment to discover high-reward states whilst still satisfying the safety constraints. Our experimental results demonstrate that ORAC prevents convergence to sub-optimal policies and significantly improves the reward-cost trade-off in various continuous control tasks, including Safety-Gymnasium and CityLearn, a complex building energy management environment.
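The exploratory objective described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the value estimates, the confidence scale `beta`, the multiplicative weight update, and the learning rate `lr` are all assumptions introduced here for clarity.

```python
import numpy as np

def orac_objective(q_reward_mean, q_reward_std, q_cost_mean, q_cost_std,
                   lam, beta=1.0):
    """Optimistic exploration score per candidate action (hypothetical sketch):
    an upper confidence bound on reward minus a lambda-weighted lower
    confidence bound on cost, so uncertain actions look attractive."""
    ucb_reward = q_reward_mean + beta * q_reward_std  # optimistic about reward
    lcb_cost = q_cost_mean - beta * q_cost_std        # optimistic (low) cost
    return ucb_reward - lam * lcb_cost

def update_cost_weight(lam, expected_cost, budget, lr=0.05):
    """Dynamic cost weighting (assumed multiplicative form): increase the
    weight when the cost estimate exceeds the constraint budget, decrease
    it otherwise."""
    if expected_cost > budget:
        lam *= 1.0 + lr
    else:
        lam = max(lam * (1.0 - lr), 1e-6)  # keep the weight positive
    return lam

# Example: pick the candidate action with the highest exploratory score.
q_r_mean = np.array([1.0, 2.0, 0.5])
q_r_std = np.array([0.1, 0.1, 0.1])
q_c_mean = np.array([0.2, 0.2, 0.2])
q_c_std = np.array([0.05, 0.05, 0.05])
scores = orac_objective(q_r_mean, q_r_std, q_c_mean, q_c_std, lam=1.0)
best_action = int(np.argmax(scores))
```

In an actor-critic setting the means and standard deviations would come from an ensemble or distributional critic; here they are plain arrays so the selection rule and the weight update can be read in isolation.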