🤖 AI Summary
This paper addresses safe reinforcement learning under epistemic uncertainty, aiming to ensure long-horizon robust safety of policies under cumulative constraints. To this end, it proposes the first framework integrating mirror descent policy optimization into robust constrained Markov decision processes (RCMDPs), jointly optimizing the policy and an adversarial transition kernel to achieve co-robustification of policy and environment model. The method unifies policy gradient updates, Lagrangian relaxation, Bregman divergence analysis, and entropy regularization. It attains convergence rates of $\mathcal{O}(1/T)$ in the oracle setting ($\mathcal{O}(e^{-T})$ for entropy-regularized objectives) and $\tilde{\mathcal{O}}(1/T^{1/3})$ in the sample-based setting. The theoretical analysis is rigorous and establishes formal guarantees on constraint satisfaction and optimality under uncertainty. Empirical evaluation demonstrates that the proposed approach significantly outperforms existing baseline algorithms in terms of robustness against model misspecification and environmental perturbations.
📝 Abstract
Safety is an essential requirement for reinforcement learning systems. The newly emerging framework of robust constrained Markov decision processes allows learning policies that satisfy long-term constraints while providing guarantees under epistemic uncertainty. This paper presents mirror descent policy optimisation for robust constrained Markov decision processes (RCMDPs), making use of policy gradient techniques to optimise both the policy (as a maximiser) and the transition kernel (as an adversarial minimiser) on the Lagrangian representing a constrained MDP. In the oracle-based RCMDP setting, we obtain an $\mathcal{O}\left(\frac{1}{T}\right)$ convergence rate for the squared distance as a Bregman divergence, and an $\mathcal{O}\left(e^{-T}\right)$ convergence rate for entropy-regularised objectives. In the sample-based RCMDP setting, we obtain an $\tilde{\mathcal{O}}\left(\frac{1}{T^{1/3}}\right)$ convergence rate. Experiments confirm the benefits of mirror descent policy optimisation in constrained and unconstrained optimisation, and significant improvements are observed in robustness tests when compared to baseline policy optimisation algorithms.
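The min–max structure described above — mirror ascent on the policy with a KL Bregman divergence, a projected step on the Lagrange multiplier, and an adversary picking the worst transition kernel — can be illustrated with a minimal tabular sketch. This is not the paper's algorithm: the two-kernel uncertainty set, the uniform initial distribution, and the finite-difference gradients are all simplifying assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 2, 2, 0.9  # toy tabular MDP

def value(P, R, pi):
    """Discounted value of policy pi (S,A) under kernel P (S,A,S), uniform start state."""
    P_pi = np.einsum('sa,sap->sp', pi, P)   # state-to-state kernel under pi
    r_pi = np.einsum('sa,sa->s', pi, R)     # expected per-state reward under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi).mean()

def grad(P, R, pi, eps=1e-5):
    """Finite-difference policy gradient (for illustration; not the paper's estimator)."""
    g = np.zeros_like(pi)
    for s in range(S):
        for a in range(A):
            d = np.zeros_like(pi)
            d[s, a] = eps
            g[s, a] = (value(P, R, pi + d) - value(P, R, pi - d)) / (2 * eps)
    return g

# Reward, cost budget, and a two-element uncertainty set of transition kernels.
R = rng.random((S, A))                      # reward to maximise
C = rng.random((S, A))                      # cost, constrained by budget b
b = 0.5 / (1 - gamma)
kernels = []
for _ in range(2):
    P = rng.random((S, A, S))
    P /= P.sum(-1, keepdims=True)
    kernels.append(P)

pi = np.full((S, A), 1.0 / A)               # uniform initial policy
lam, eta, eta_lam = 0.0, 0.5, 0.05

for t in range(200):
    # Adversary: worst-case kernel in the uncertainty set for the current Lagrangian.
    P = min(kernels, key=lambda K: value(K, R, pi) - lam * (value(K, C, pi) - b))
    # Mirror ascent on the policy with KL Bregman divergence:
    # the exponentiated-gradient update keeps each row on the simplex.
    g = grad(P, R, pi) - lam * grad(P, C, pi)
    pi = pi * np.exp(eta * g)
    pi /= pi.sum(-1, keepdims=True)
    # Projected subgradient step on the multiplier (lam >= 0).
    lam = max(0.0, lam + eta_lam * (value(P, C, pi) - b))
```

With the KL mirror map, the mirror-descent step reduces to the multiplicative (exponentiated-gradient) update shown in the loop, which is why each policy row stays a valid distribution without an explicit projection.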