🤖 AI Summary
To address the insufficient robustness of field-of-view (FOV) estimation for autonomous vehicles under sensor perception or transmission attacks, this paper proposes a probabilistic, segmentation-based, uncertainty-aware FOV estimation method. The approach integrates semantic segmentation with classical graphics-based FOV modeling, augmented by Monte Carlo Dropout and confidence-map-based anomaly detection to quantify segmentation uncertainty. Key contributions include: (1) the first FOV-annotated dataset tailored to autonomous driving scenarios; (2) a novel uncertainty-aware FOV segmentation framework that jointly leverages deep learning and geometric priors; and (3) significant improvements in FOV estimation robustness and cross-environment generalization under adversarial conditions, while maintaining real-time inference capability. Experimental evaluation across diverse attack settings demonstrates both verifiability and deployability, establishing a practical, safety-enhancing solution for perception under sensor compromise.
📝 Abstract
Attacks on sensing and perception threaten the safe deployment of autonomous vehicles (AVs). Security-aware sensor fusion helps mitigate these threats, but it requires accurate field of view (FOV) estimation, which has not previously been evaluated in autonomy contexts. To address this gap, we adapt classical computer graphics algorithms to develop the first autonomy-relevant FOV estimators and create the first datasets with ground-truth FOV labels. Unfortunately, we find that these approaches are themselves highly vulnerable to attacks on sensing. To improve the robustness of FOV estimation against attacks, we propose a learning-based segmentation model that captures FOV features, integrates Monte Carlo dropout (MCD) for uncertainty quantification, and performs anomaly detection on confidence maps. Comprehensive evaluations demonstrate attack resistance and strong generalization across environments. Architecture trade studies show the model is feasible for real-time deployment in multiple applications.
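The paper's model and data are not reproduced here, but the core MCD idea it relies on can be sketched in a few lines: run the same stochastic (dropout-enabled) forward pass several times, treat the per-pixel mean as a confidence map and the per-pixel variance as an uncertainty map, then flag pixels with anomalously high variance. The toy linear "segmentation head", the sigmoid confidence, and the 2-sigma anomaly threshold below are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, w, p=0.5):
    """One stochastic pass: dropout stays active at inference (MC Dropout)."""
    mask = rng.random(w.shape) > p
    logits = x @ (w * mask / (1.0 - p))      # inverted-dropout scaling
    return 1.0 / (1.0 + np.exp(-logits))     # sigmoid -> per-pixel confidence

def mc_dropout_uncertainty(x, w, n_samples=50):
    """T stochastic passes: mean = confidence map, variance = uncertainty map."""
    preds = np.stack([dropout_forward(x, w) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)

# Toy stand-in for image features: 64 "pixels" with 8 features each,
# and a linear head producing a per-pixel FOV / not-FOV confidence.
feats = rng.normal(size=(64, 8))
weights = rng.normal(size=(8, 1))
conf, unc = mc_dropout_uncertainty(feats, weights)

# Anomaly detection on the uncertainty map: flag pixels whose predictive
# variance is far above typical (a possible sign of a sensing attack).
threshold = unc.mean() + 2.0 * unc.std()
anomalous = unc > threshold
print(f"{int(anomalous.sum())} / {unc.size} pixels flagged as anomalous")
```

In the paper's setting the same statistics would come from a segmentation network with dropout layers left enabled at test time; the mean and variance maps then feed the confidence-map anomaly detector.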