AI Summary
Addressing the challenge of jointly optimizing policy performance and enforcing hard safety constraints in safe reinforcement learning, this paper proposes Safety-Modulated Policy Optimization (SMPO). Within the standard policy gradient framework, SMPO introduces a novel Q-cost safety critic and a differentiable cost-weighted reward modulation function, explicitly embedding cumulative cost constraints into the reward structure to enable simultaneous optimization of task performance and safety compliance. Built upon an Actor-Critic architecture, SMPO jointly learns the policy network and the Q-cost critic, with online parameter updates via gradient descent. Evaluated across multiple benchmark environments, SMPO significantly outperforms CPO, PPO-Lag, and Saute RL, reducing safety violation rates by 37%–62% while maintaining state-of-the-art task performance. Key contributions include: (i) an end-to-end differentiable, cost-aware reward modulation mechanism; and (ii) a Q-cost-based safety evaluation paradigm.
Abstract
Safe Reinforcement Learning (Safe RL) aims to train an RL agent to maximize its performance in real-world environments while adhering to safety constraints, as exceeding safety violation limits can result in severe consequences. In this paper, we propose a novel safe RL approach called Safety Modulated Policy Optimization (SMPO), which enables safe policy function learning within the standard policy optimization framework through safety modulated rewards. In particular, we treat safety violation costs as feedback from the RL environment, parallel to the standard rewards, and introduce a Q-cost function as a safety critic to estimate the expected future cumulative cost. We then modulate the rewards using a cost-aware weighting function, which is carefully designed to enforce the safety limits based on the safety critic's estimates while still maximizing the expected rewards. The policy function and the safety critic are learned simultaneously through gradient descent during online interaction with the environment. We conduct experiments in multiple RL environments, and the results demonstrate that our method outperforms several classic and state-of-the-art comparison methods in overall safe RL performance.
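The core mechanism in the abstract, scaling rewards by a differentiable weight derived from the safety critic's cost estimate, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sigmoid-of-margin weighting, the `budget` and `temperature` parameters, and the function names are all assumptions chosen to show the general idea of a weight that stays near 1 when the estimated cumulative cost is well below the safety limit and decays toward 0 as it approaches or exceeds it.

```python
import math

def cost_weight(q_cost, budget, temperature=1.0):
    """Hypothetical cost-aware weight in (0, 1).

    Sigmoid of the remaining safety margin: close to 1 when the
    safety critic's estimate q_cost = Q_c(s, a) is far below the
    safety budget, close to 0 when it exceeds the budget. Smooth,
    so gradients can flow through it during policy optimization.
    """
    margin = (budget - q_cost) / temperature
    return 1.0 / (1.0 + math.exp(-margin))

def modulated_reward(reward, q_cost, budget):
    """Reward scaled by the cost-aware weight, so returns collected
    in predicted-unsafe regions contribute less to the policy update."""
    return cost_weight(q_cost, budget) * reward
```

With this shape, an action whose estimated future cost sits exactly at the budget keeps half its reward, while clearly safe actions keep nearly all of it; the paper's actual weighting function may differ.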