Learning Vision-Based Neural Network Controllers with Semi-Probabilistic Safety Guarantees

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of formal safety guarantees for autonomous systems operating on visual inputs, this paper proposes the first semi-probabilistic safety verification framework integrating reachability analysis, conditional generative adversarial networks (cGANs), and distribution-free tail-bound estimation. We further design an end-to-end training paradigm synergizing a safety-aware loss function, critical-sample active sampling, and curriculum learning. The method jointly achieves high-fidelity perception modeling in high-dimensional visual spaces and rigorous, verifiable safety. Evaluated on X-Plane 11 landing, CARLA lane-following, and F1Tenth physical-platform tasks, it delivers semi-probabilistic safety guarantees with ≥99.9% confidence while matching the nominal performance of unconstrained models. Key contributions include: (i) the first scalable semi-probabilistic verification architecture; (ii) a perception–safety co-optimization paradigm; and (iii) empirical validation of high-confidence safety bounds under real-hardware closed-loop operation.

📝 Abstract
Ensuring safety in autonomous systems with vision-based control remains a critical challenge due to the high dimensionality of image inputs and the fact that the relationship between the true system state and its visual manifestation is unknown. Existing methods for learning-based control in such settings typically lack formal safety guarantees. To address this challenge, we introduce a novel semi-probabilistic verification framework that integrates reachability analysis with conditional generative adversarial networks and distribution-free tail bounds to enable efficient and scalable verification of vision-based neural network controllers. We then develop a gradient-based training approach that combines a novel safety loss function, a safety-aware data-sampling strategy that efficiently selects and stores critical training examples, and curriculum learning to synthesize safe controllers within the semi-probabilistic framework. Empirical evaluations on X-Plane 11 airplane landing simulation, CARLA-simulated autonomous lane following, and F1Tenth lane following in a physical, visually rich miniature environment demonstrate the effectiveness of our method in achieving formal safety guarantees while maintaining strong nominal performance. Our code is available at https://github.com/xhOwenMa/SPVT.
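The "distribution-free tail bounds" mentioned in the abstract can be illustrated with a one-sided Hoeffding bound on the empirical violation rate over sampled closed-loop rollouts. This is a minimal sketch of the general idea, not the paper's exact estimator; the function name, the choice of Hoeffding's inequality, and the sample counts are assumptions for illustration.

```python
import math

def safety_upper_bound(num_unsafe: int, num_trials: int, delta: float = 1e-3) -> float:
    """One-sided Hoeffding upper bound on the true violation probability.

    With probability at least 1 - delta over the i.i.d. sampled rollouts,
    the true violation probability is at most the returned value.
    Distribution-free: only assumes Bernoulli (safe/unsafe) outcomes.
    """
    p_hat = num_unsafe / num_trials
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * num_trials))
    return min(1.0, p_hat + slack)

# e.g. 0 observed violations in 10,000 rollouts at 99.9% confidence
# (delta = 1e-3) certifies a violation probability below roughly 2%.
bound = safety_upper_bound(0, 10_000, delta=1e-3)
```

Tightening the bound requires more rollouts: the slack term shrinks only as 1/sqrt(n), which is why scalable verification of the perception model (via the cGAN) matters for collecting enough samples.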
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety in vision-based autonomous systems
Lack of formal safety guarantees in existing methods
High dimensionality and unknown state-visual relationship challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-probabilistic verification framework integration
Gradient-based training with safety loss function
Safety-aware data-sampling and curriculum learning
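The "safety loss function" contribution above can be sketched as a composite objective: the nominal control loss plus a hinge penalty that activates only when an estimated safety margin goes negative (i.e., the reachable set leaves the safe region). The function below is a hypothetical illustration; the names, the hinge form, and the penalty weight are assumptions, not the paper's exact formulation.

```python
def safety_aware_loss(nominal_loss: float, safety_margins: list[float],
                      weight: float = 10.0) -> float:
    """Composite training objective (sketch).

    nominal_loss:   task-performance loss (e.g., tracking or imitation error)
    safety_margins: per-sample signed distances to the unsafe set; negative
                    values indicate a (predicted) safety violation
    weight:         trade-off between nominal performance and safety
    """
    # Hinge penalty: zero for safe samples, linear in violation depth otherwise.
    violation = sum(max(0.0, -m) for m in safety_margins) / len(safety_margins)
    return nominal_loss + weight * violation
```

Because the penalty is zero on safe samples, gradients from the safety term concentrate on the critical examples that the safety-aware sampling strategy is designed to retain.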
Xinhang Ma
Washington University in St. Louis
Machine learning, artificial intelligence, security
Junlin Wu
Department of Computer Science & Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
Hussein Sibai
Washington University in St. Louis
Control theory, formal methods, machine learning, robotics
Y. Kantaros
Department of Computer Science & Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
Yevgeniy Vorobeychik
Washington University in Saint Louis
Artificial intelligence, adversarial machine learning, computational game theory, security and privacy