Learn With Imagination: Safe Set Guided State-wise Constrained Policy Optimization

📅 2023-08-25
🏛️ arXiv.org
📈 Citations: 6
Influential: 1
🤖 AI Summary
Deep reinforcement learning (DRL) achieves strong performance in control tasks, yet its exploratory training process frequently violates safety constraints, hindering real-world deployment; meanwhile, conventional safety-critical control methods rely on precise system dynamics models, which are often unavailable in practice. Method: The paper proposes a state-wise safe policy optimization framework that requires no prior knowledge of system dynamics and achieves zero safety violations throughout training. The approach integrates a black-box safety monitor, a safe-set-guided exploration mechanism, and an imaginary cost constraint into a differentiable policy gradient objective. Results: Experiments on high-dimensional robotic control tasks demonstrate strict adherence to state-wise safety constraints, significantly outperforming existing safe DRL baselines while preserving efficient policy learning.
📝 Abstract
Deep reinforcement learning (RL) excels in various control tasks, yet the absence of safety guarantees hampers its real-world applicability. In particular, exploration during learning usually results in safety violations, while the RL agent learns from those mistakes. On the other hand, safe control techniques ensure persistent safety satisfaction but demand strong priors on system dynamics, which are usually hard to obtain in practice. To address these problems, we present Safe Set Guided State-wise Constrained Policy Optimization (S-3PO), a pioneering algorithm generating state-wise safe optimal policies with zero training violations, i.e., learning without mistakes. S-3PO first employs a safety-oriented monitor with black-box dynamics to ensure safe exploration. It then enforces an "imaginary" cost for the RL agent to converge to optimal behaviors within safety constraints. S-3PO outperforms existing methods in high-dimensional robotics tasks, managing state-wise constraints with zero training violations. This innovation marks a significant stride towards real-world safe RL deployment.
Problem

Research questions and friction points this paper is trying to address.

Ensuring safe exploration in deep reinforcement learning without violating constraints during training
Generating state-wise safe optimal policies without strong priors on system dynamics
Achieving zero safety violations throughout the learning process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Safe Set Guided State-wise Constrained Policy Optimization
Safety-oriented monitor with black-box dynamics
Imaginary cost enforcement for safe convergence
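The interplay between the safety monitor and the imaginary cost can be illustrated with a minimal sketch. This is not the paper's algorithm: the monitor is stood in for by a simple box projection, and all names (`safe_projection`, `step_with_imaginary_cost`) are hypothetical. The key idea it conveys is that the monitor corrects unsafe actions before execution (so no real violation ever occurs), while the size of the correction serves as an "imaginary" cost signal the RL agent can be penalized on, steering it toward policies that need no correction.

```python
import numpy as np

def safe_projection(action, safe_low, safe_high):
    """Project a proposed action into a box-shaped safe set.
    Illustrative stand-in for the paper's black-box safety monitor."""
    return np.clip(action, safe_low, safe_high)

def step_with_imaginary_cost(policy_action, safe_low, safe_high):
    """Return the executed (safe) action and an 'imaginary' cost:
    the magnitude of the correction the monitor applied. The cost is
    nonzero whenever the raw policy action would leave the safe set,
    even though the executed action never violates safety."""
    executed = safe_projection(policy_action, safe_low, safe_high)
    imaginary_cost = float(np.linalg.norm(policy_action - executed))
    return executed, imaginary_cost

# Example: a 2-D action partly outside the safe box [-1, 1]^2
executed, cost = step_with_imaginary_cost(
    np.array([1.5, 0.2]), np.array([-1.0, -1.0]), np.array([1.0, 1.0])
)
# The monitor clips the first component to 1.0; cost = |1.5 - 1.0| = 0.5
```

In a constrained policy optimization loop, this imaginary cost would be accumulated and bounded like a standard constraint cost, which is how the agent converges to behaviors that stay inside the safe set on their own.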