Conservative Perception Models for Probabilistic Model Checking

📅 2025-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Learning-based perception modules (e.g., the YOLO11 object detector) in autonomous driving systems are difficult to verify formally: they are black boxes whose behavior under environmental uncertainty is stochastic. Method: The paper proposes a provably conservative probabilistic modeling and verification framework built around an Interval Markov Decision Process (IMDP) abstraction of the closed-loop system. The abstraction is derived via statistical confidence estimation over the perception component's intervalized outputs, yielding a closed-loop model that soundly over-approximates all possible perceptual behaviors under uncertainty; this conservativeness is proven to hold with high probability. Contribution/Results: Integrated with the CARLA driving simulator, the framework delivers system-level probabilistic safety guarantees for automatic emergency braking, with analytically controllable verification error bounds that meet the requirements of formal model checking. To the authors' knowledge, this is the first approach offering both mathematical rigor and engineering practicality for probabilistic verification of black-box learning-based perception components.
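The "statistical confidence estimation and output intervalization" step is described only at a high level above. Below is a minimal sketch of one standard way to realize it, assuming the detector's outputs are discretized into finitely many bins and per-bin frequencies are bounded with Clopper-Pearson confidence intervals; the bin names and estimator choice are illustrative assumptions, not necessarily the paper's.

```python
# Hypothetical sketch: probability intervals for a discretized perception model.
# Assumes n i.i.d. detector outputs were collected for one fixed true state;
# the paper's actual discretization and estimator may differ.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact two-sided (1 - alpha) confidence interval for a binomial proportion."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

def perception_intervals(counts: dict[str, int], alpha: float = 0.05):
    """Map each output bin to a probability interval for the abstraction.

    counts[b] = how often the detector produced bin b for the fixed true state.
    A Bonferroni correction makes all intervals hold simultaneously with
    probability >= 1 - alpha, the kind of guarantee a "conservative with
    high probability" abstraction needs.
    """
    n = sum(counts.values())
    per_bin_alpha = alpha / max(len(counts), 1)
    return {b: clopper_pearson(k, n, per_bin_alpha) for b, k in counts.items()}

# Example: outcomes of 100 detector runs at one fixed relative distance.
print(perception_intervals({"detected_near": 37, "detected_far": 61, "missed": 2}))
```

Each interval then becomes a transition-probability interval of the IMDP, which is exactly the ingredient that interval model checkers consume.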

📝 Abstract
Verifying the behaviors of autonomous systems with learned perception components is a challenging problem due to the complexity of the perception and the uncertainty of operating environments. Probabilistic model checking is a powerful tool for providing guarantees on stochastic models of systems. However, constructing model-checkable models of black-box perception components for system-level mathematical guarantees has been an enduring challenge. In this paper, we propose a method for constructing provably conservative Interval Markov Decision Process (IMDP) models of closed-loop systems with perception components. We prove that our technique results in conservative abstractions with high probability. We evaluate our approach in an automatic braking case study using both a synthetic perception component and the object detector YOLO11 in the CARLA driving simulator.
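To make concrete how a probabilistic model checker uses such an abstraction, here is a minimal sketch of pessimistic value iteration over an interval Markov chain, i.e., the action-free special case of an IMDP (a full IMDP check would additionally minimize over policies). The toy braking states and interval values are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: lower-bounding a step-bounded safety probability on an
# interval Markov chain. The adversary resolves each probability interval
# so as to minimize the value, which yields a sound lower bound.

def worst_case_step(intervals, values):
    """Adversarially resolve intervals {succ: (lo, hi)} to minimize E[values].

    Requires sum(lo) <= 1 <= sum(hi). Mass above the lower bounds is assigned
    to the lowest-valued successors first (standard interval minimization).
    """
    dist = {s: lo for s, (lo, hi) in intervals.items()}
    slack = 1.0 - sum(dist.values())
    for s in sorted(intervals, key=lambda s: values[s]):
        lo, hi = intervals[s]
        take = min(hi - lo, slack)
        dist[s] += take
        slack -= take
    return sum(p * values[s] for s, p in dist.items())

def safety_lower_bound(transitions, safe, horizon):
    """Lower bound on P(stay in `safe` for `horizon` steps), per state."""
    values = {s: 1.0 if s in safe else 0.0 for s in transitions}
    for _ in range(horizon):
        values = {
            s: worst_case_step(transitions[s], values) if s in safe else 0.0
            for s in transitions
        }
    return values

# Toy automatic-braking chain with perception-induced interval uncertainty.
toy = {
    "safe_gap":  {"safe_gap": (0.90, 0.97), "close": (0.03, 0.10)},
    "close":     {"safe_gap": (0.60, 0.80), "close": (0.10, 0.25),
                  "collision": (0.02, 0.15)},
    "collision": {"collision": (1.0, 1.0)},
}
print(safety_lower_bound(toy, safe={"safe_gap", "close"}, horizon=20))
```

Because the intervals over-approximate the true (unknown) transition probabilities, any bound computed this way also holds for the real closed-loop system, with the confidence inherited from the interval estimation step.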
Problem

Research questions and friction points this paper is trying to address.

Verify autonomous systems with learned perception components
Construct model-checkable models for black-box perception
Provide conservative abstractions for system-level guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructs conservative Interval Markov Decision Process models
Provides high-probability conservative abstractions (see the illustrative safety property after this list)
Evaluated with synthetic and YOLO11 perception components
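The system-level guarantee such a framework checks can be phrased as a step-bounded probabilistic safety property; an illustrative PCTL-style form (the paper's exact query may differ) is:

```latex
% Illustrative property, not necessarily the paper's exact query:
% under every resolution of the perception intervals (and every policy),
% the probability of avoiding collision within k steps is at least p_safe.
\mathbb{P}_{\min}\!\left[\, \mathbf{G}^{\le k}\ \neg\,\mathit{collision} \,\right] \;\ge\; p_{\mathrm{safe}}
```

Since the IMDP soundly over-approximates the closed loop, establishing this bound on the abstraction transfers it to the true system with the stated statistical confidence.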