🤖 AI Summary
Neuro-symbolic program verification suffers from a semantic gap, termed the "embedding gap", between neural components and symbolic logic. Method: This paper formally defines the embedding gap and proposes an end-to-end formal verification framework comprising: (1) a domain-specific language (DSL) for declaratively specifying problem-space properties; (2) a multi-backend compiler that provides a declarative, compilable mapping from the problem space to the embedding space, interfacing seamlessly with PyTorch, Marabou, and Lean; and (3) modular co-verification across training environments, neural network verifiers, and theorem provers. Contribution/Results: The paper demonstrates fully automated, reproducible, and mathematically rigorous safety verification of a simplified autonomous driving system equipped with a neural network controller. The approach systematically bridges the semantic divide between neural and symbolic verification, enabling principled integration of learning-based and logic-based reasoning within a unified formal framework.
📝 Abstract
Neuro-symbolic programs -- programs containing both machine learning components and traditional symbolic code -- are becoming increasingly widespread. However, there is still no general methodology for verifying programs whose correctness depends on the behaviour of their machine learning components. In this paper, we identify the ``embedding gap'' -- the lack of techniques for linking semantically meaningful ``problem-space'' properties to equivalent ``embedding-space'' properties -- as one of the key issues, and describe Vehicle, a tool designed to facilitate the end-to-end verification of neuro-symbolic programs in a modular fashion. Vehicle provides a convenient language for specifying ``problem-space'' properties of neural networks and declaring their relationship to the ``embedding space'', and a powerful compiler that automates the interpretation of these properties in the language of a chosen machine-learning training environment, neural network verifier, and interactive theorem prover. We demonstrate Vehicle's utility by using it to formally verify the safety of a simple autonomous car equipped with a neural network controller.