🤖 AI Summary
Formal verification of autonomous systems incorporating deep neural networks (DNNs) faces significant challenges, including large-scale models, high environmental uncertainty, and difficulty in quantifying safety guarantees. This paper proposes a scenario-decomposition-based probabilistic compositional verification framework: first, decomposing complex dynamic environments into tractable, modelable sub-scenarios; second, constructing compact, environment-adaptive probabilistic abstractions of the perception DNNs for each sub-scenario; and third, integrating SMT-based symbolic reasoning with a novel acceleration proof rule that provides provable error bounds, enabling efficient end-to-end verification. This work establishes a scenario-driven compositional verification paradigm that overcomes the scalability limitations of monolithic, whole-system verification. Evaluated on aircraft taxiway guidance and F1Tenth autonomous driving simulations, the approach achieves a 10×–100× speedup in verification time while quantifying and bounding system failure probabilities under diverse environmental conditions.
📝 Abstract
Recent advances in deep learning have enabled the development of autonomous systems that use deep neural networks for perception. Formal verification of these systems is challenging due to the size and complexity of the perception DNNs as well as hard-to-quantify, changing environment conditions. To address these challenges, we propose a probabilistic verification framework for autonomous systems based on the following key concepts: (1) Scenario-based Modeling: We decompose the task (e.g., car navigation) into a composition of scenarios, each representing a different environment condition. (2) Probabilistic Abstractions: For each scenario, we build a compact abstraction of perception based on the DNN's performance on an offline dataset that represents the scenario's environment condition. (3) Symbolic Reasoning and Acceleration: The abstractions enable efficient compositional verification of the autonomous system via symbolic reasoning and a novel acceleration proof rule that bounds the error probability of the system under arbitrary variations of environment conditions. We illustrate our approach on two case studies: an experimental autonomous system that guides airplanes on taxiways using high-dimensional perception DNNs and a simulation model of an F1Tenth autonomous car using LiDAR observations.
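To make the abstraction idea in step (2) concrete, here is a minimal sketch of how a per-scenario probabilistic abstraction of a perception DNN might be estimated from an offline dataset and composed into a system-level bound. The function names and the simple total-probability composition are illustrative assumptions for exposition, not the paper's actual abstraction construction or proof rules:

```python
def scenario_error_prob(predictions, labels):
    """Empirical misperception probability for one scenario, estimated
    from the DNN's outputs on that scenario's offline dataset.
    This stands in for the compact probabilistic abstraction."""
    wrong = sum(p != l for p, l in zip(predictions, labels))
    return wrong / len(labels)

def composed_error_bound(scenario_errs, scenario_weights):
    """Total-probability bound on perception error across scenarios,
    where weights model how often each environment condition occurs.
    (Illustrative composition; the paper uses symbolic reasoning and
    an acceleration proof rule to derive its bounds.)"""
    return sum(w * e for e, w in zip(scenario_errs, scenario_weights))

# Toy example: a "bright" scenario where perception is reliable and a
# "dark" scenario where it errs more often.
bright_err = scenario_error_prob(
    ["on_center", "off_left", "on_center", "on_center"],
    ["on_center", "off_left", "on_center", "off_left"])   # 0.25
dark_err = scenario_error_prob(
    ["off_left", "on_center"],
    ["off_left", "off_left"])                             # 0.5

# Environment spends 80% of the time in the bright scenario.
bound = composed_error_bound([bright_err, dark_err], [0.8, 0.2])
print(bound)  # 0.8*0.25 + 0.2*0.5 = 0.3
```

The point of the abstraction is that downstream verification reasons only about these small per-scenario error probabilities rather than the DNN itself, which is what makes compositional, scenario-by-scenario analysis tractable.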