Safe Driving in Occluded Environments

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Occlusions render safety-critical states unobservable, undermining the long-term safety guarantees of both model-driven (set-invariance-based) and data-driven autonomous driving approaches. This paper proposes a latent-risk safety certification framework grounded in *probabilistic invariance*, which relaxes the full state observability that deterministic set invariance requires. By formulating probabilistic safety constraints, the framework quantifies and bounds the risks arising from occluded regions, yielding verifiable linear action constraints compatible with both model predictive control and data-driven policies. Evaluated in CARLA under real-time constraints, the method significantly improves long-term safety in occlusion-prone scenarios while keeping risk exposure transparent and avoiding excessive conservatism.

📝 Abstract
Ensuring safe autonomous driving in the presence of occlusions poses a significant challenge for policy design. While existing model-driven control techniques based on set invariance can handle visible risks, occlusions create latent risks in which safety-critical states are not observable. Data-driven techniques also struggle with latent risks, because a direct mapping from risk-critical objects in sensor inputs to safe actions cannot be learned when those objects are not visible. Motivated by these challenges, we propose a probabilistic safety certificate for latent risk. Our key technical enabler is probabilistic invariance: it relaxes the strict observability requirements of set-invariance methods, which demand knowledge of risk-critical states. The proposed technique provides linear action constraints that confine the latent-risk probability within a tolerance. Such constraints can be integrated into model predictive controllers or embedded in data-driven policies to mitigate latent risks. The proposed method is tested in the CARLA simulator and compared with several existing techniques. Theoretical and empirical analyses jointly demonstrate that the proposed method assures long-term safety in real-time control in occluded environments without being overly conservative, while remaining transparent about exposed risks.
Problem

Research questions and friction points this paper is trying to address.

Ensuring safe autonomous driving in occluded environments
Handling latent risks from unobservable safety-critical states
Providing probabilistic safety certificates for real-time control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic safety certificate for latent risk
Probabilistic invariance relaxes observability requirements
Linear action constraints confine latent risk probability
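The innovations above hinge on the fact that a linear action constraint of the form a·u ≤ b is cheap to enforce at control time. As a minimal sketch of how such a certificate could wrap a nominal controller, the snippet below projects a nominal action onto a single half-space; the vector `a` and bound `b` are illustrative placeholders for the paper's constraints, which are actually derived from probabilistic invariance over the occluded region:

```python
import numpy as np

def certify_action(u_nominal, a, b):
    """Return the minimal-norm correction of u_nominal onto {u : a @ u <= b}.

    The half-space stands in for a linear latent-risk constraint; in the
    paper such constraints are derived from probabilistic invariance, and
    the names here are purely illustrative.
    """
    violation = float(a @ u_nominal) - b
    if violation <= 0.0:
        return u_nominal  # nominal action already satisfies the risk bound
    # Subtract the component along the constraint normal to land on the boundary.
    return u_nominal - (violation / float(a @ a)) * a

# Hypothetical example: nominal [acceleration, steering] command exceeds the bound.
u_nominal = np.array([2.0, 0.5])
a = np.array([1.0, 0.0])  # constraint acting on longitudinal acceleration only
b = 1.0
u_safe = certify_action(u_nominal, a, b)  # -> array([1.0, 0.5])
```

In practice such a constraint would be added to the MPC's quadratic program (or applied as a projection layer on a learned policy's output) rather than solved in isolation, but the enforcement step remains linear in the action.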
Zhuoyuan Wang
Department of Electrical and Computer Engineering, Carnegie Mellon University
Tongyao Jia
Department of Electrical and Computer Engineering, Carnegie Mellon University
Pharuj Rajborirug
Department of Electrical and Computer Engineering, Carnegie Mellon University
Neeraj Ramesh
Department of Electrical and Computer Engineering, Carnegie Mellon University
Hiroyuki Okuda
Department of Mechanical Systems Engineering, Nagoya University, Japan
Tatsuya Suzuki
Department of Mechanical Systems Engineering, Nagoya University, Japan
Soummya Kar
Electrical and Computer Engineering, Carnegie Mellon University
Large Scale Stochastic Systems
Yorie Nakahira
Assistant Professor, Carnegie Mellon University
Control and learning · Optimization · Autonomous systems · Language-guided control