Kernel-Based Learning of Safety Barriers

📅 2026-01-17
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work proposes a data-driven framework for safety verification and synthesis tailored to black-box AI systems operating in safety-critical settings with discrete-time stochastic dynamics. By constructing ambiguity sets in a reproducing kernel Hilbert space (RKHS) based on observed system trajectories, the approach leverages conditional mean embeddings to characterize uncertainty without requiring explicit knowledge of the system dynamics or noise distributions. A finite Fourier expansion is employed to transform the resulting semi-infinite optimization problem into a tractable linear program. The framework accommodates general temporal logic specifications and incorporates a distributionally robust mechanism to handle out-of-distribution behaviors. Empirical evaluations on black-box systems—including those with neural network controllers—demonstrate that the method ensures safety while maintaining strong scalability and robustness.
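The conditional mean embedding step described above can be illustrated with a small numerical sketch. This is not the paper's implementation: the toy dynamics, Gaussian kernel, bandwidth, and regularization constant below are all illustrative assumptions. The idea is that kernel-ridge weights, beta(x) = (K + n*lambda*I)^{-1} k(x), turn observed successor states into an estimate of E[f(x') | x] for a function f, without ever querying the system model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Trajectory pairs (x_i, x'_i) from an unknown stochastic system.
# The map below is used ONLY to generate data, never by the estimator.
n = 300
xs = rng.uniform(-1, 1, n)                       # sampled states
xn = np.sin(2 * xs) + 0.05 * rng.standard_normal(n)  # observed successors

def k(a, b, s=0.2):
    """Gaussian kernel matrix between two 1-D point sets (bandwidth s assumed)."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * s ** 2))

# Empirical conditional mean embedding: beta(x) = (K + n*lam*I)^{-1} k(xs, x).
lam = 1e-3
Kmat = k(xs, xs) + n * lam * np.eye(n)

def cond_mean(f, xq):
    """Estimate E[f(x') | x = xq] as beta(xq)^T f(xn)."""
    beta = np.linalg.solve(Kmat, k(xs, xq))      # shape (n, len(xq))
    return beta.T @ f(xn)

# Sanity check against the (normally unknown) ground truth at x = 0.3:
# E[(sin(0.6) + w)^2] = sin(0.6)^2 + 0.05^2 for w ~ N(0, 0.05^2).
xq = np.array([0.3])
est = cond_mean(lambda z: z ** 2, xq)
true = np.sin(2 * 0.3) ** 2 + 0.05 ** 2
print("estimate:", est[0], " truth:", true)
```

Because the estimate is a weighted sum of kernel evaluations at the observed successors, it stays linear in any function expressed in a finite basis, which is what later makes the barrier conditions linear-programmable.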

📝 Abstract
The rapid integration of AI algorithms in safety-critical applications such as autonomous driving and healthcare is raising significant concerns about the ability to meet stringent safety standards. Traditional tools for formal safety verification struggle with the black-box nature of AI-driven systems and lack the flexibility needed to scale to the complexity of real-world applications. In this paper, we present a data-driven approach for safety verification and synthesis of black-box systems with discrete-time stochastic dynamics. We employ the concept of control barrier certificates, which can guarantee the safety of the system, and learn the certificate directly from a set of system trajectories. We use conditional mean embeddings to embed data from the system into a reproducing kernel Hilbert space (RKHS) and construct an RKHS ambiguity set that can be inflated to robustify the result to out-of-distribution behavior. We provide theoretical results showing how to apply the approach to general classes of temporal logic specifications beyond safety. For the data-driven computation of safety barriers, we leverage a finite Fourier expansion to cast a typically intractable semi-infinite optimization problem as a linear program. The resulting spectral barrier allows us to leverage the fast Fourier transform to generate the relaxed problem efficiently, offering a scalable yet distributionally robust framework for verifying safety. Our work moves beyond restrictive assumptions on system dynamics and uncertainty, as demonstrated on two case studies including a black-box system with a neural network controller.
Problem

Research questions and friction points this paper is trying to address.

safety verification
black-box systems
stochastic dynamics
control barrier certificates
distributional robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

kernel methods
control barrier certificates
data-driven verification
distributional robustness
Fourier expansion