Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural control barrier functions (neural CBFs) face a scalability bottleneck in safety verification for nonlinear control systems: existing verification methods have high computational complexity and do not scale to large networks. Method: This paper proposes an efficient verification framework based on linear bound propagation (LBP), combining McCormick relaxations with bounds on the network's gradients to analytically derive linear upper and lower bounds on the CBF conditions. It further introduces a parallelizable, adaptive refinement strategy that substantially reduces conservatism. The framework supports common activation functions (e.g., ReLU, tanh) and arbitrary control-affine systems. Results: Experiments demonstrate successful safety verification of neural CBFs with up to ∼1,000 neurons across multiple nonlinear dynamical systems. The method achieves 10–100× speedups over prior approaches and a substantially higher certified verification success rate than state-of-the-art methods.

📝 Abstract
Control barrier functions (CBFs) are a popular tool for safety certification of nonlinear dynamical control systems. Recently, CBFs represented as neural networks have shown great promise due to their expressiveness and applicability to a broad class of dynamics and safety constraints. However, verifying that a trained neural network is indeed a valid CBF is a computational bottleneck that limits the size of the networks that can be used. To overcome this limitation, we present a novel framework for verifying neural CBFs based on piecewise linear upper and lower bounds on the conditions required for a neural network to be a CBF. Our approach is rooted in linear bound propagation (LBP) for neural networks, which we extend to compute bounds on the gradients of the network. Combined with McCormick relaxation, we derive linear upper and lower bounds on the CBF conditions, thereby eliminating the need for computationally expensive verification procedures. Our approach applies to arbitrary control-affine systems and a broad range of nonlinear activation functions. To reduce conservatism, we develop a parallelizable refinement strategy that adaptively refines the regions over which these bounds are computed. Our approach scales to larger neural networks than state-of-the-art verification procedures for CBFs, as demonstrated by our numerical experiments.
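The abstract's core ingredient is linear bound propagation: sound linear upper and lower bounds on a network's output over a box of inputs. As a minimal sketch of the general idea (not the paper's implementation), the snippet below computes CROWN-style linear bounds for a single-hidden-layer ReLU network; the function names and the one-layer restriction are illustrative assumptions.

```python
import numpy as np

def relu_relaxation(zl, zu):
    """Per-neuron linear bounds a*z + c on relu(z) for z in [zl, zu]."""
    au = np.zeros_like(zl); cu = np.zeros_like(zl)
    al = np.zeros_like(zl); cl = np.zeros_like(zl)
    pos = zl >= 0                          # always-active neurons: relu(z) = z
    au[pos] = 1.0; al[pos] = 1.0
    mix = (zl < 0) & (zu > 0)              # unstable neurons: chord above, 0/identity below
    au[mix] = zu[mix] / (zu[mix] - zl[mix])
    cu[mix] = -au[mix] * zl[mix]
    al[mix] = (zu[mix] >= -zl[mix]).astype(float)  # adaptive lower slope (0 or 1)
    return al, cl, au, cu

def linear_bounds(W1, b1, W2, b2, l, u):
    """Sound linear bounds on f(x) = W2 @ relu(W1 @ x + b1) + b2 over the box [l, u]."""
    # interval bounds on pre-activations z = W1 @ x + b1
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    zl = Wp @ l + Wn @ u + b1
    zu = Wp @ u + Wn @ l + b1
    al, cl, au, cu = relu_relaxation(zl, zu)
    # back-substitute: positive output weights take the upper relaxation
    # for the upper bound, negative weights take the lower one (and vice versa)
    w2p, w2n = np.maximum(W2, 0), np.minimum(W2, 0)
    AU = w2p * au + w2n * al; cU = w2p @ cu + w2n @ cl + b2
    AL = w2p * al + w2n * au; cL = w2p @ cl + w2n @ cu + b2
    aU = AU @ W1; bU = AU @ b1 + cU        # linear coefficients in x
    aL = AL @ W1; bL = AL @ b1 + cL
    # concretize the linear functions over the input box
    fU = np.maximum(aU, 0) @ u + np.minimum(aU, 0) @ l + bU
    fL = np.maximum(aL, 0) @ l + np.minimum(aL, 0) @ u + bL
    return fL, fU
```

The paper extends this style of propagation to the network's gradients, which is what the CBF condition needs; the sketch above bounds only the function value.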
Problem

Research questions and friction points this paper is trying to address.

Verifying neural control barrier functions efficiently for nonlinear systems
Overcoming computational bottlenecks in neural network safety certification
Scaling verification to larger networks using linear bound propagation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses linear bound propagation for neural networks
Extends LBP to compute neural network gradient bounds
Employs adaptive refinement strategy to reduce conservatism
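The second point above combines gradient bounds with McCormick relaxation, which replaces a product of two bounded quantities by linear envelopes. As a self-contained illustration (the paper applies this to products arising in the CBF condition, not to a standalone scalar), here is the standard McCormick envelope for a single bilinear term w = x*y:

```python
def mccormick(x, y, xl, xu, yl, yu):
    """McCormick envelope: sound linear lower/upper bounds on the bilinear
    term x*y, valid for x in [xl, xu] and y in [yl, yu]."""
    lo = max(xl * y + x * yl - xl * yl,    # supporting planes from below
             xu * y + x * yu - xu * yu)
    hi = min(xu * y + x * yl - xu * yl,    # supporting planes from above
             xl * y + x * yu - xl * yu)
    return lo, hi
```

Each of the four planes is linear in x and y, so substituting the network's linear gradient bounds into such envelopes keeps the overall CBF-condition bound linear; the envelope is exact at the corners of the box, which is why shrinking the regions (the paper's adaptive refinement) tightens it.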