🤖 AI Summary
Occlusions render safety-critical states unobservable, undermining the long-term safety guarantees of both existing model-driven (set-invariance-based) and data-driven autonomous driving approaches. This paper proposes a latent-risk safety certification framework that replaces deterministic set invariance with *probabilistic invariance* in safety certificate design, thereby relaxing the requirement of full state observability. By formulating probabilistic safety constraints, the framework quantifies and bounds the risk arising from occluded regions, yielding verifiable linear action constraints compatible with both model predictive control and data-driven policies. Evaluated in CARLA under real-time constraints, the method significantly improves long-term safety in occlusion-prone scenarios while keeping risk exposure transparent and avoiding excessive conservatism.
📝 Abstract
Ensuring safe autonomous driving in the presence of occlusions poses a significant challenge for policy design. While existing model-driven control techniques based on set invariance can handle visible risks, occlusions create latent risks in which safety-critical states are not observable. Data-driven techniques also struggle with latent risks because a direct mapping from risk-critical objects in sensor inputs to safe actions cannot be learned when those objects are not visible. Motivated by these challenges, in this paper we propose a probabilistic safety certificate for latent risk. Our key technical enabler is the application of probabilistic invariance: it relaxes the strict observability requirements imposed by set-invariance methods, which demand knowledge of risk-critical states. The proposed technique provides linear action constraints that confine the latent risk probability within a given tolerance. Such constraints can be integrated into model predictive controllers or embedded in data-driven policies to mitigate latent risks. The proposed method is tested in the CARLA simulator and compared with several existing techniques. Theoretical and empirical analyses jointly demonstrate that the proposed methods assure long-term safety in real-time control in occluded environments without being overly conservative and with transparency about exposed risks.
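To make the "linear action constraints" mechanism concrete, the sketch below shows how such a constraint could act as a safety filter wrapped around a nominal policy: the certificate supplies a half-space constraint `a * u <= b` on the scalar action `u`, and the filter minimally modifies the nominal action to satisfy it. This is an illustrative 1-D reconstruction, not the paper's implementation; the function names, the scalar action space, and the specific constraint values are all assumptions.

```python
def safety_filter(u_nom: float, a: float, b: float) -> float:
    """Project a nominal scalar action onto the half-space a*u <= b.

    The linear constraint stands in for the paper's probabilistic
    safety certificate, which bounds the probability of entering a
    latent (occluded) risk set. In 1-D the projection is a clip.
    All names and the scalar setting are illustrative assumptions.
    """
    if a == 0.0:
        # Constraint does not involve the action; pass it through.
        return u_nom
    bound = b / a
    if a > 0.0:
        # a*u <= b  <=>  u <= b/a : cap the action from above.
        return min(u_nom, bound)
    # a*u <= b with a < 0  <=>  u >= b/a : cap from below.
    return max(u_nom, bound)
```

Because the constraint is linear, the same projection extends to vector actions as a small quadratic program, which is what makes it cheap enough to embed in a real-time model predictive controller or a learned policy.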