🤖 AI Summary
Existing neural safety synthesis methods for high-dimensional robotic control lack convergence guarantees and interpretability, making it hard to achieve safety and scalability at the same time. This paper proposes MAGICS, an adversarial reinforcement learning framework based on an implicit-critic Stackelberg game, which provides a local convergence guarantee for neural safety synthesis. The method couples minimax actors with an implicit critic architecture, enabling provably convergent computation of minimax equilibria. It further integrates neural Lyapunov functions with high-dimensional policy networks to jointly improve closed-loop stability and representational capacity. Evaluated on OpenAI Gym benchmarks and in hardware experiments on a 36-dimensional quadruped robot, the approach achieves substantially improved robustness and safety, outperforming state-of-the-art neural safety synthesis methods across all reported metrics.
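To make the Stackelberg structure concrete, the sketch below (a toy illustration, not the paper's algorithm; the quadratic game, step size `lr`, and regularizer `eps` are all assumptions made for exposition) contrasts naive simultaneous gradient descent-ascent, which spirals only slowly toward the minimax point, with a Stackelberg leader that differentiates through the follower's implicitly defined best response and converges quickly:

```python
# Toy Stackelberg-vs-GDA comparison on a regularized bilinear game
#   f(x, y) = x*y - 0.5*eps*y**2,   min over x, max over y,
# whose unique minimax equilibrium is (0, 0). Illustrative only.
import numpy as np

eps, lr = 0.1, 0.05

def grad_x(x, y):          # partial derivative of f w.r.t. x
    return y

def grad_y(x, y):          # partial derivative of f w.r.t. y
    return x - eps * y

# 1) Simultaneous gradient descent-ascent: each player follows its own
#    partial gradient. The iterates spiral and decay only very slowly.
x, y = 1.0, 1.0
for _ in range(500):
    x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)
print(f"GDA:         x={x:+.4f}, y={y:+.4f}")   # still far from (0, 0)

# 2) Stackelberg leader update: solve the follower's first-order condition
#    grad_y = 0 implicitly (y*(x) = x / eps) and descend the leader's
#    reduced objective f(x, y*(x)). By the envelope theorem, its gradient
#    is just grad_x evaluated at (x, y*(x)).
x = 1.0
for _ in range(500):
    y_star = x / eps                    # follower's implicit best response
    x = x - lr * grad_x(x, y_star)
print(f"Stackelberg: x={x:+.4f}, y*={x / eps:+.4f}")   # converges to (0, 0)
```

This mirrors, in miniature, the role of the implicit critic in the paper's formulation: the inner player is characterized implicitly through its optimality condition, and the outer update differentiates through that characterization instead of treating both players symmetrically.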
📝 Abstract
While robust optimal control theory provides a rigorous framework to compute robot control policies that are provably safe, it struggles to scale to high-dimensional problems. This has led to increased use of deep learning for tractable synthesis of robot safety. Unfortunately, existing neural safety synthesis methods often lack convergence guarantees and solution interpretability. In this paper, we present Minimax Actors Guided by Implicit Critic Stackelberg (MAGICS), a novel adversarial reinforcement learning (RL) algorithm that guarantees local convergence to a minimax equilibrium solution. We then build on this approach to provide local convergence guarantees for a general deep RL-based robot safety synthesis algorithm. Through both simulation studies in OpenAI Gym environments and hardware experiments with a 36-dimensional quadruped robot, we show that MAGICS yields robust control policies that outperform state-of-the-art neural safety synthesis methods.
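For context, the classical baseline the abstract alludes to is a robust dynamic program over the state space. The sketch below (a minimal illustration under assumed toy dynamics, not the paper's code: a double integrator with discretized control set `U`, disturbance set `D`, and an arbitrary grid resolution) iterates the robust-safety Bellman backup V(x) = min{ l(x), max_u min_d V(f(x, u, d)) }, where l(x) is a signed safety margin and V(x) >= 0 certifies that safety can be maintained against the worst-case disturbance. Its cost grows exponentially with state dimension, which is exactly the scalability gap that neural synthesis methods such as MAGICS target:

```python
# Tabular robust-safety value iteration on a toy double integrator:
#   p' = p + dt*v,  v' = v + dt*(u + d),  safe set |p| <= 1.
# Grid sizes, dynamics, and action sets are illustrative assumptions.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

dt = 0.1
p = np.linspace(-1.5, 1.5, 61)           # position grid
v = np.linspace(-2.0, 2.0, 81)           # velocity grid
P, Vel = np.meshgrid(p, v, indexing="ij")
l = 1.0 - np.abs(P)                      # safety margin: >= 0 iff |p| <= 1
U = [-1.0, 1.0]                          # control set (tries to stay safe)
D = [-0.4, 0.4]                          # disturbance set (adversarial)

V = l.copy()
for _ in range(200):                     # value iteration to a fixed point
    interp = RegularGridInterpolator((p, v), V,
                                     bounds_error=False, fill_value=None)
    best_u = np.full_like(V, -np.inf)
    for u in U:
        worst_d = np.full_like(V, np.inf)
        for d in D:
            Pn = P + dt * Vel            # next position
            Vn = Vel + dt * (u + d)      # next velocity
            pts = np.stack([Pn.ravel(), Vn.ravel()], axis=1)
            worst_d = np.minimum(worst_d, interp(pts).reshape(V.shape))
        best_u = np.maximum(best_u, worst_d)
    V_new = np.minimum(l, best_u)        # robust-safety Bellman backup
    converged = np.max(np.abs(V_new - V)) < 1e-4
    V = V_new
    if converged:
        break

print(f"fraction of grid certified safe: {np.mean(V >= 0):.2f}")
```

States near the position boundary with high outward velocity come out with V < 0: no admissible control can brake in time against the worst-case disturbance. Scaling such a table to a 36-dimensional quadruped state is intractable, hence the paper's RL-based synthesis.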