🤖 AI Summary
Ensuring both safety and performance throughout the full reinforcement learning lifecycle, from offline training to online deployment, remains a fundamental challenge. Method: We propose the first provably safe end-to-end framework, which (i) constructs a safe initial policy via return-conditioned supervised learning; (ii) cautiously tunes a small set of parameters, the target returns, during deployment using Gaussian-process Bayesian optimization; and (iii) justifies this design with a theoretical analysis of the relationship between target and actual returns. Contribution/Results: Our approach provides rigorous, high-confidence guarantees that the deployed policy satisfies safety constraints at all times while converging to near-optimal target returns. Evaluated on multiple benchmark tasks, it achieves 100% runtime safety and improves average reward by 23–41% over state-of-the-art safe RL methods, addressing the long-standing trade-off between lifetime safety and high performance.
📝 Abstract
A longstanding goal in safe reinforcement learning (RL) is to ensure the safety of a policy throughout its entire lifetime, from learning to operation. However, existing safe RL paradigms inherently struggle to achieve this objective. We propose a method, called Provably Lifetime Safe RL (PLS), that integrates offline safe RL with safe policy deployment to address this challenge. Our method learns a policy offline using return-conditioned supervised learning and then deploys the resulting policy while cautiously optimizing a limited set of parameters, known as target returns, using Gaussian processes (GPs). Theoretically, we justify the use of GPs by analyzing the mathematical relationship between target and actual returns. We then prove that PLS finds near-optimal target returns while guaranteeing safety with high probability. Empirically, we demonstrate that PLS outperforms baselines in both safety and reward performance, thereby achieving the longstanding goal of obtaining high rewards while ensuring the safety of a policy throughout its lifetime, from learning to operation.
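The deployment phase described above (condition the learned policy on a tunable target return, then cautiously optimize that target with a GP) can be sketched in a toy form. This is not the authors' algorithm and carries none of its guarantees: the kernel, the confidence parameter `beta`, the `cost_limit`, the seeding at the most conservative candidate, and the `evaluate` stand-in for "deploy the policy at this target return" are all illustrative assumptions; the sketch only shows the general pattern of a GP-based search that restricts evaluations to targets whose predicted safety cost is acceptable.

```python
# Minimal sketch (illustrative, not the paper's algorithm): GP-based cautious
# optimization of a single scalar "target return" parameter.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.5, variance=1.0):
    """Squared-exponential kernel for 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    """Standard GP regression posterior mean and std at x_query."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    Kss = rbf_kernel(x_query, x_query)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    var = np.clip(np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks)), 1e-12, None)
    return mean, np.sqrt(var)

def cautious_target_search(evaluate, candidates, n_rounds=10, beta=1.0, cost_limit=1.0):
    """GP-UCB over target returns, restricted to candidates whose pessimistic
    (upper-confidence) safety-cost estimate stays below cost_limit."""
    # Seed with the most conservative candidate, assumed safe a priori.
    x = [candidates[0]]
    r0, c0 = evaluate(candidates[0])
    rets, costs = [r0], [c0]
    for _ in range(n_rounds):
        xs = np.array(x)
        r_mean, r_std = gp_posterior(xs, np.array(rets), candidates)
        c_mean, c_std = gp_posterior(xs, np.array(costs), candidates)
        safe = c_mean + beta * c_std <= cost_limit  # pessimistic safety filter
        if not safe.any():
            break
        ucb = np.where(safe, r_mean + beta * r_std, -np.inf)
        nxt = candidates[int(np.argmax(ucb))]
        r, c = evaluate(nxt)
        x.append(nxt); rets.append(r); costs.append(c)
    best = int(np.argmax(rets))
    return x[best], rets[best]

# Toy stand-in for "deploy the policy conditioned on this target return":
# the actual return tracks the target up to a ceiling, while cost grows
# with the target, so ambitious targets are unsafe.
def evaluate(target):
    actual_return = min(target, 2.5) + 0.01 * np.sin(5 * target)
    cost = 0.4 * target
    return actual_return, cost

candidates = np.linspace(0.5, 4.0, 15)
best_target, best_return = cautious_target_search(evaluate, candidates)
```

Because the safety filter uses an upper confidence bound on cost, the search only expands to more ambitious targets as the GP becomes confident they are safe, which mirrors the "cautious optimization" idea in the abstract.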