🤖 AI Summary
This paper addresses the safety certification problem for neural network controllers subject to state/input delays and interval matrix uncertainties. The authors propose a risk-aware control method based on positive Lur’e systems. The key contribution is the first use of a positive Lur’e structure to construct linear, delay-independent safety certificates, bypassing traditional integral quadratic constraint (IQC) frameworks, which require explicit delay modeling and suffer from semidefinite programming (SDP) scalability bottlenecks. By characterizing the neural network’s nonlinearities via local sector bounds and integrating positivity analysis with IQC verification, the method efficiently certifies local exponential stability. Experiments demonstrate that the approach accelerates verification by several orders of magnitude compared to SDP-based IQC methods, while successfully certifying safety over substantially larger uncertainty intervals that were previously infeasible, thereby expanding the scope of scalable, formal safety guarantees.
📝 Abstract
We present a risk-aware safety certification method for autonomous, learning-enabled control systems. Focusing on two realistic risks, state/input delays and interval matrix uncertainty, we model the neural network (NN) controller with local sector bounds and exploit positivity structure to derive linear, delay-independent certificates that guarantee local exponential stability across all admissible uncertainties. To benchmark performance, we adopt and implement a state-of-the-art IQC-based NN verification pipeline. On representative cases, our positivity-based tests run orders of magnitude faster than SDP-based IQC while certifying regimes the latter cannot, providing scalable safety guarantees that complement risk-aware control.
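To make the "linear, delay-independent certificate" idea concrete, here is a minimal sketch (not the paper's implementation) of the classical positivity-based test for a linear delay system x'(t) = A x(t) + A_d x(t − τ): when A is Metzler (nonnegative off-diagonal) and A_d is entrywise nonnegative, exponential stability for all delays τ ≥ 0 holds iff A + A_d is Hurwitz, which can be certified by a purely linear check, finding v > 0 with (A + A_d)v < 0 entrywise, rather than by solving an SDP. The matrices below are a hypothetical toy example, not from the paper.

```python
import numpy as np

def is_metzler(A, tol=1e-9):
    """Check that all off-diagonal entries of A are nonnegative."""
    off = A - np.diag(np.diag(A))
    return bool(np.all(off >= -tol))

def delay_independent_certificate(A, Ad):
    """Linear (non-SDP) stability test for a positive linear delay system.

    For x'(t) = A x(t) + Ad x(t - tau) with A Metzler and Ad >= 0,
    stability for ALL delays tau >= 0 is equivalent to A + Ad being
    Hurwitz, certified by a vector v > 0 with (A + Ad) v < 0 entrywise.
    """
    assert is_metzler(A) and np.all(Ad >= 0), "positivity structure required"
    M = A + Ad
    # If M is Metzler and Hurwitz, -M is a nonsingular M-matrix, so
    # v = M^{-1} (-1) is positive and satisfies M v = -1 < 0 entrywise.
    try:
        v = np.linalg.solve(M, -np.ones(M.shape[0]))
    except np.linalg.LinAlgError:
        return False, None
    ok = bool(np.all(v > 0) and np.all(M @ v < 0))
    return ok, v

# Hypothetical toy system (illustration only).
A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
Ad = np.array([[0.5, 0.2],
               [0.1, 0.4]])
stable, v = delay_independent_certificate(A, Ad)
print(stable)  # certificate found: stability for every delay tau >= 0
```

The appeal of this structure is that the certificate is a single positive vector found by solving one linear system (or, for interval uncertainty, one linear program at the worst-case vertex), which is what makes it scale where delay-dependent SDP formulations do not.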