🤖 AI Summary
This work addresses the challenge of safe and efficient exploration for online reinforcement learning in real-world environments. To this end, the authors propose SOOPER, a method that, for the first time, guarantees convergence to an optimal policy while satisfying strict safety constraints throughout the entire learning process. SOOPER combines a probabilistic dynamics model, a conservative but suboptimal policy prior (obtained, e.g., from offline data or simulators), and a hybrid optimistic-pessimistic exploration mechanism that reverts to the safe prior in high-uncertainty regions. Theoretical analysis establishes a formal upper bound on cumulative regret, while experiments across multiple safe reinforcement learning benchmarks and real hardware platforms demonstrate SOOPER's scalability, strong empirical performance, and agreement with its theoretical guarantees.
📝 Abstract
Safe exploration is a key requirement for reinforcement learning (RL) agents to learn and adapt online, beyond controlled (e.g., simulated) environments. In this work, we tackle this challenge by utilizing suboptimal yet conservative policies (e.g., obtained from offline data or simulators) as priors. Our approach, SOOPER, uses probabilistic dynamics models to explore optimistically, yet pessimistically falls back to the conservative policy prior when needed. We prove that SOOPER guarantees safety throughout learning, and establish convergence to an optimal policy by bounding its cumulative regret. Extensive experiments on key safe RL benchmarks and real-world hardware demonstrate that SOOPER is scalable and outperforms the state of the art, and validate our theoretical guarantees in practice.
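The hybrid optimistic-pessimistic rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ensemble-disagreement uncertainty estimate, the threshold, and all function names here are assumptions for exposition.

```python
import numpy as np

def ensemble_uncertainty(models, state, action):
    """Disagreement across an ensemble of dynamics models, used here as a
    simple stand-in for the epistemic uncertainty estimate (assumption)."""
    preds = np.stack([m(state, action) for m in models])
    return preds.std(axis=0).max()

def select_action(state, optimistic_policy, safe_prior, models, threshold):
    """Hybrid rule: explore with the optimistic action, but pessimistically
    fall back to the conservative policy prior when uncertainty is high."""
    a_opt = optimistic_policy(state)
    if ensemble_uncertainty(models, state, a_opt) > threshold:
        return safe_prior(state)  # revert to the safe policy prior
    return a_opt
```

In this sketch, safety during exploration comes from never executing an optimistic action whose predicted outcome the model ensemble disagrees about beyond the threshold.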