Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs

📅 2024-01-12
🏛️ arXiv.org
📈 Citations: 10
Influential: 0
🤖 AI Summary
Neuro-symbolic program verification suffers from a semantic gap, termed the "embedding gap", between neural components and symbolic logic. Method: This paper formally defines the embedding gap and proposes an end-to-end formal verification framework comprising (1) a domain-specific language (DSL) for declaratively specifying problem-space properties; (2) a multi-backend compiler that maps these properties from the problem space to the embedding space, interfacing with PyTorch, Marabou, and Lean; and (3) modular co-verification across training environments, neural network verifiers, and theorem provers. Contribution/Results: The authors demonstrate fully automated, reproducible, and mathematically rigorous safety verification of a simplified autonomous driving system equipped with a neural network controller. The approach systematically bridges the semantic divide between neural and symbolic verification, enabling principled integration of learning-based and logic-based reasoning within a unified formal framework.
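A minimal sketch of the embedding gap the summary describes (illustrative Python, not Vehicle syntax; the embedding, controller, and property below are hypothetical stand-ins): a problem-space property is stated over physically meaningful quantities, while the network only sees embedded inputs, so the two must be linked by an explicitly declared embedding function.

```python
# Illustrative sketch of the "embedding gap" (hypothetical names, not Vehicle's API).

def embed(distance_m: float, velocity_ms: float) -> list[float]:
    # Declared embedding: normalise physical quantities into the [0, 1]
    # range the network was trained on.
    return [distance_m / 100.0, velocity_ms / 30.0]

def controller(x: list[float]) -> float:
    # Stand-in for a trained network: brake harder when close and fast.
    return min(1.0, 0.8 * (1.0 - x[0]) + 0.4 * x[1])

def problem_space_property(distance_m: float, velocity_ms: float) -> bool:
    # "If an obstacle is within 10 m at speed >= 15 m/s, brake force exceeds 0.5."
    # The property is stated in the problem space; the embedding function
    # carries it into the space the controller actually operates on.
    if distance_m <= 10.0 and velocity_ms >= 15.0:
        return controller(embed(distance_m, velocity_ms)) > 0.5
    return True

# A verifier backend would prove this for *all* inputs; here we only sample.
samples = [(d, v) for d in range(0, 101, 5) for v in range(0, 31, 5)]
print(all(problem_space_property(float(d), float(v)) for d, v in samples))
```

The point of a tool like Vehicle is that the user writes only the problem-space statement and the embedding declaration once; the compiler, rather than hand-written glue code, produces the embedding-space query each backend needs.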

📝 Abstract
Neuro-symbolic programs -- programs containing both machine learning components and traditional symbolic code -- are becoming increasingly widespread. However, we believe that there is still a lack of a general methodology for verifying these programs whose correctness depends on the behaviour of the machine learning components. In this paper, we identify the "embedding gap" -- the lack of techniques for linking semantically-meaningful "problem-space" properties to equivalent "embedding-space" properties -- as one of the key issues, and describe Vehicle, a tool designed to facilitate the end-to-end verification of neural-symbolic programs in a modular fashion. Vehicle provides a convenient language for specifying "problem-space" properties of neural networks and declaring their relationship to the "embedding space", and a powerful compiler that automates interpretation of these properties in the language of a chosen machine-learning training environment, neural network verifier, and interactive theorem prover. We demonstrate Vehicle's utility by using it to formally verify the safety of a simple autonomous car equipped with a neural network controller.
Problem

Research questions and friction points this paper is trying to address.

Verify programs that combine neural and symbolic components
Address the embedding gap in combining neural and symbolic proofs
Develop a verification language for safe specification and compilation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes neuro-symbolic verification into modular, independently checkable parts
Introduces Vehicle as an intermediate verification language
Compiles neural network specifications safely to multiple backend interfaces
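The multi-backend compilation idea can be sketched as follows (illustrative Python, hypothetical names, not Vehicle's actual API or semantics): one declarative property is given two interpretations, a crisp boolean query for a verifier backend and a differentiable penalty for a training backend.

```python
# One declarative property, two backend interpretations (hypothetical sketch).
from dataclasses import dataclass

@dataclass
class GreaterThan:
    # Declarative atom: asserts lhs > rhs.
    lhs: float
    rhs: float

def as_query(prop: GreaterThan) -> bool:
    # Verifier interpretation: a crisp truth value.
    return prop.lhs > prop.rhs

def as_loss(prop: GreaterThan) -> float:
    # Training interpretation: a hinge-style penalty that is zero when the
    # property holds and grows with the margin of violation.
    return max(0.0, prop.rhs - prop.lhs)

p = GreaterThan(lhs=0.9, rhs=0.5)
print(as_query(p), as_loss(p))  # True 0.0
```

Because both interpretations are derived from the same declarative source, the property checked by the verifier and the property encouraged during training cannot silently drift apart, which is the "safely" in compiling one specification to multiple interfaces.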
M. Daggitt
Heriot-Watt University, Edinburgh, UK; University of Western Australia, Perth, Australia
Wen Kokke
University of Strathclyde, Glasgow, UK
R. Atkey
University of Strathclyde, Glasgow, UK
Natalia Slusarz
Heriot-Watt University, Edinburgh, UK
Luca Arnaboldi
University of Birmingham, Birmingham, UK
Ekaterina Komendantskaya
Professor in Computer Science, Southampton University and Heriot-Watt University, UK
Logic · AI Verification · Logic Programming · Theorem Proving · Machine Learning