🤖 AI Summary
This work addresses the challenge of verifying neural control barrier functions (NCBFs) for nonlinear dynamical systems, where learning errors make verification intractable and the resulting safe sets overly conservative. We propose CP-NCBF, the first framework to integrate conformal prediction with NCBFs: rather than relying on restrictive Lipschitz continuity assumptions, it uses quantile calibration to issue probabilistic safety certificates, guaranteeing that the safety violation rate of the certified set stays below a user-specified error level. Compared to existing approaches, CP-NCBF is sample-efficient, scalable, and yields less conservative safe sets. On obstacle avoidance for autonomous driving and geo-fencing of aerial vehicles, it significantly expands the certified safe region while keeping the verification error rate within the pre-specified threshold.
📝 Abstract
Control Barrier Functions (CBFs) are a practical approach for designing safety-critical controllers, but constructing them for arbitrary nonlinear dynamical systems remains a challenge. Recent efforts have explored learning-based methods, such as neural CBFs (NCBFs), to address this issue. However, ensuring the validity of NCBFs is difficult due to potential learning errors. In this letter, we propose CP-NCBF, a novel framework that leverages split-conformal prediction to generate formally verified neural CBFs with probabilistic guarantees at a user-defined error rate. Unlike existing methods that impose Lipschitz constraints on the neural CBF, which limits scalability and yields overly conservative safe sets, our approach is sample-efficient, scalable, and results in less restrictive safety regions. We validate our framework through case studies on obstacle avoidance in autonomous driving and geo-fencing of aerial vehicles, demonstrating its ability to generate larger and less conservative safe sets compared to conventional techniques.
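To make the calibration step concrete, below is a minimal sketch of split-conformal calibration for a learned barrier candidate. This is not the authors' implementation: the stand-in function `h`, the choice of nonconformity score (the value of `h` on states sampled from the true unsafe region), and the sampling scheme are all illustrative assumptions; only the quantile rule is the standard split-conformal construction.

```python
import numpy as np

# Minimal sketch (not the paper's code): split-conformal calibration of a
# learned barrier candidate h. All names below are illustrative assumptions.

def h(x):
    """Stand-in for a trained neural CBF h_theta: R^2 -> R.
    Here: signed distance to a unit-disk obstacle at the origin, so
    {x : h(x) >= 0} plays the role of the (imperfect) learned safe set."""
    return np.linalg.norm(x, axis=-1) - 1.0

def conformal_quantile(scores, alpha):
    """Return the ceil((n+1)(1-alpha))-th smallest score: the standard
    split-conformal quantile with marginal (1 - alpha) coverage."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    assert k <= n, "calibration set too small for this alpha"
    return np.sort(scores)[k - 1]

rng = np.random.default_rng(0)
alpha = 0.05  # user-specified error rate

# Calibration states drawn from the true unsafe region (here: the unit disk).
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
r = np.sqrt(rng.uniform(0.0, 1.0, 2000))  # uniform sampling over the disk
x_unsafe = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

# Nonconformity score: the value of h on truly unsafe states. A positive
# score means the learned candidate wrongly labels that state as safe.
q = conformal_quantile(h(x_unsafe), alpha)

# Calibrated safe set: {x : h(x) > q}. By exchangeability of the calibration
# and test states, a fresh unsafe state lands in this set with probability
# at most alpha -- the probabilistic certificate at the user-defined rate.
print(f"calibrated margin q = {q:.4f}")
print("certified safe?", h(np.array([[1.5, 0.0]])) > q)
```

Because the margin `q` is read off the calibration data rather than derived from a global Lipschitz bound, the certified set is tightened only as much as the observed learning errors require, which is the source of the reduced conservatism claimed above.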