BarrierBench: Evaluating Large Language Models for Safety Verification in Dynamical Systems

📅 2025-11-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Traditional safety verification methods for dynamical systems suffer from poor scalability, heavy reliance on manually crafted templates and expert knowledge, and labor-intensive certificate synthesis. Method: This paper proposes the first large language model (LLM)-based agent framework that turns expert linguistic reasoning into automated barrier certificate synthesis. The approach integrates natural language reasoning, retrieval-augmented generation (RAG), SMT solving, and formal verification to jointly support template discovery, certificate optimization, and co-design of controllers and certificates. Contribution/Results: Evaluated on a benchmark of 100 dynamical systems, the framework achieves over a 90% success rate in generating valid barrier certificates. The authors open-source the complete toolchain and dataset, establishing the first systematic, language-model-driven safety verification loop, bridging AI-based reasoning and formal methods in a principled, end-to-end manner.
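The propose-then-verify idea can be illustrated with a toy barrier check. The sketch below, in plain Python, tests the three standard barrier-certificate conditions by sampling, for a hand-written candidate B(x) = x - 1 on the scalar system dx/dt = -x. The system, sets, and candidate are illustrative assumptions, not taken from the paper, and sampling is only a falsifier: the paper's framework discharges these conditions with an SMT solver.

```python
import random

# Toy setup (illustrative, not from the paper):
#   dynamics      dx/dt = f(x) = -x
#   initial set   |x| <= 0.5,   unsafe set   x >= 2
#   candidate barrier  B(x) = x - 1,  so  dB/dt = B'(x) * f(x) = -x
f = lambda x: -x
B = lambda x: x - 1.0
dB = lambda x: 1.0 * f(x)

def falsify(n=10_000, seed=0):
    """Sampling-based check of the barrier conditions; returns a labelled
    counterexample or None. A necessary-condition filter, not a proof."""
    rng = random.Random(seed)
    for _ in range(n):
        x0 = rng.uniform(-0.5, 0.5)      # condition 1: B <= 0 on the initial set
        if B(x0) > 0:
            return ("init", x0)
        xu = rng.uniform(2.0, 10.0)      # condition 2: B > 0 on the unsafe set
        if B(xu) <= 0:
            return ("unsafe", xu)
        xb = rng.uniform(0.999, 1.001)   # condition 3: dB/dt <= 0 near the boundary B = 0
        if abs(B(xb)) < 1e-3 and dB(xb) > 0:
            return ("flow", xb)
    return None

print(falsify())   # None: no counterexample found on the samples
```

Here the candidate happens to be valid, so the falsifier finds nothing; in the agentic loop described in the summary, a counterexample would instead be fed back to the proposer as natural-language feedback.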

๐Ÿ“ Abstract
Safety verification of dynamical systems via barrier certificates is essential for ensuring correctness in autonomous applications. Synthesizing these certificates requires discovering suitable mathematical functions, and current methods suffer from poor scalability, dependence on carefully designed templates, and exhaustive or incremental function-space searches. They also demand substantial manual expertise (selecting templates, solvers, and hyperparameters, and designing sampling strategies), requiring both theoretical and practical knowledge traditionally shared through linguistic reasoning rather than formalized methods. This motivates a key question: can such expert reasoning be captured and operationalized by language models? We address this by introducing an LLM-based agentic framework for barrier certificate synthesis. The framework uses natural language reasoning to propose, refine, and validate candidate certificates, integrating LLM-driven template discovery with SMT-based verification, and supporting barrier-controller co-synthesis to ensure consistency between safety certificates and controllers. To evaluate this capability, we introduce BarrierBench, a benchmark of 100 dynamical systems spanning linear, nonlinear, discrete-time, and continuous-time settings. Our experiments assess not only the effectiveness of LLM-guided barrier synthesis but also the utility of retrieval-augmented generation and agentic coordination strategies in improving its reliability and performance. Across these tasks, the framework achieves more than 90% success in generating valid certificates. By releasing BarrierBench and the accompanying toolchain, we aim to establish a community testbed for advancing the integration of language-based reasoning with formal verification in dynamical systems. The benchmark is publicly available at https://hycodev.com/dataset/barrierbench
Problem

Research questions and friction points this paper is trying to address.

Automating safety verification through barrier certificates for dynamical systems
Overcoming limitations of manual template selection and function-space searches
Evaluating whether language models can capture expert reasoning for certificate synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based agentic framework for barrier certificate synthesis
Integrates natural language reasoning with SMT verification
Supports barrier-controller co-synthesis for safety consistency
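The propose-verify-refine loop behind the bullets above can be sketched as counterexample-guided synthesis. In this minimal Python sketch, the LLM proposer is replaced by a stub that enumerates offsets c in the linear template B(x) = x - c for the same toy system dx/dt = -x (initial set |x| <= 0.5, unsafe set x >= 2); the system, template family, and all names are hypothetical, the flow condition is omitted for brevity, and the sampling falsifier stands in for the paper's SMT-based verifier.

```python
import random

def verify(B, rng, n=5_000):
    """Sampling falsifier for the toy system dx/dt = -x with initial set
    |x| <= 0.5 and unsafe set x >= 2; returns a violating sample or None."""
    for _ in range(n):
        x0 = rng.uniform(-0.5, 0.5)      # need B <= 0 on the initial set
        if B(x0) > 0:
            return x0
        xu = rng.uniform(2.0, 6.0)       # need B > 0 on the unsafe set
        if B(xu) <= 0:
            return xu
    return None

def synthesize():
    """CEGIS-style loop: enumerate candidate templates and keep the first
    one the falsifier cannot break. The paper's framework instead queries
    an LLM for candidates and verifies them with an SMT solver."""
    rng = random.Random(42)
    for c in (0.0, 0.25, 1.0, 3.0):      # stub "proposer" over B(x) = x - c
        B = lambda x, c=c: x - c
        if verify(B, rng) is None:
            return c
    return None

print(synthesize())   # 1.0: B(x) = x - 1 survives falsification
```

The first two candidates are rejected because sampled initial states with x > c violate B <= 0; c = 1.0 separates the initial and unsafe sets and is accepted. In the paper's setting, the rejected candidates' counterexamples would be returned to the LLM as refinement feedback rather than silently advancing an enumeration.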