🤖 AI Summary
Addressing the challenge of ensuring safety for reinforcement learning (RL)-based navigation policies in dynamic and uncertain environments, this paper proposes a hierarchical safety-aware navigation framework. First, unsafe regions of operation are identified via probabilistic enumeration. Second, neural network verification is integrated with probabilistic sampling, enabling the automatic synthesis of generalizable control barrier functions (CBFs) without prior knowledge of the system dynamics. Third, the synthesized CBFs are embedded into a real-time safety layer that corrects arbitrary end-to-end RL policies, providing plug-and-play safety augmentation. Evaluated both in simulation and on a real-world autonomous underwater vehicle platform, the approach reduces safety violations by 92%, achieves a 98.7% task completion rate, and preserves over 95% of the original policy’s efficiency.
📝 Abstract
Safe autonomous navigation is critical for deploying robots in dynamic and uncertain real-world environments. In this paper, we propose a hierarchical control framework that leverages neural network verification techniques to design control barrier functions (CBFs) and policy correction mechanisms ensuring the safety of reinforcement learning navigation policies. Our approach relies on probabilistic enumeration to identify unsafe regions of operation, which are then used to construct a safe CBF-based control layer applicable to arbitrary policies. We validate our framework both in simulation and on a real robot, using a standard mobile robot benchmark and a highly dynamic aquatic environmental monitoring task. These experiments demonstrate that the proposed solution corrects unsafe actions while preserving efficient navigation behavior. Our results show the promise of hierarchical verification-based systems for enabling safe and robust navigation in complex scenarios.
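To make the safety-layer idea concrete, the following is a minimal sketch of a CBF-based action filter of the kind the abstract describes: a nominal action from an arbitrary RL policy is minimally corrected so that it satisfies the barrier condition. The sketch assumes single-integrator dynamics (ẋ = u) and a hand-written distance barrier h(x) = ‖x − x_obs‖² − r²; the paper instead *synthesizes* the CBF via neural network verification and probabilistic sampling, so the barrier, dynamics, and the function name `cbf_safety_filter` here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, x_obs, r, alpha=1.0):
    """Minimum-norm correction of a nominal policy action so it satisfies
    the CBF condition  grad_h(x) . u + alpha * h(x) >= 0
    for single-integrator dynamics x_dot = u and the barrier
    h(x) = ||x - x_obs||^2 - r^2  (h >= 0 means the state is safe)."""
    diff = x - x_obs
    h = diff @ diff - r ** 2        # barrier value
    grad_h = 2.0 * diff             # gradient of h w.r.t. x
    margin = grad_h @ u_nom + alpha * h
    if margin >= 0.0:
        return u_nom                # nominal action already satisfies the CBF
    # Closed-form solution of the safety QP
    #   min ||u - u_nom||^2  s.t.  grad_h . u + alpha * h >= 0:
    # project u_nom onto the constraint boundary along grad_h.
    lam = -margin / (grad_h @ grad_h)
    return u_nom + lam * grad_h

# Example: the policy drives straight toward an obstacle; the filter
# removes exactly the unsafe component of the action.
x = np.array([1.0, 0.0])            # robot position
u_safe = cbf_safety_filter(x, np.array([-2.0, 0.0]),
                           x_obs=np.zeros(2), r=0.5)
```

Because the constraint is a single affine inequality in u, the QP has this closed-form projection; with the synthesized CBFs and richer dynamics of the paper, a small QP solver would take its place, but the plug-and-play structure (policy → filter → actuator) is the same.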