Predictive Uncertainty Quantification for Bird's Eye View Segmentation: A Benchmark and Novel Loss Function

📅 2024-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the unreliability of BEV semantic segmentation models in autonomous driving due to the lack of uncertainty quantification (UQ). We introduce the first UQ benchmark specifically designed for BEV segmentation, systematically evaluating misclassification detection, out-of-distribution (OOD) pixel identification, and model calibration. Methodologically: (1) we pioneer the integration of evidential deep learning into BEV segmentation to jointly model aleatoric and epistemic uncertainty; (2) we propose the Uncertainty-Focal-Cross-Entropy (UFCE) loss to mitigate extreme class imbalance; and (3) we introduce an uncertainty-scaling regularization term to improve calibration. Extensive experiments across three datasets (including nuScenes) and three state-of-the-art architectures demonstrate significant improvements: +8.2% AUROC for OOD detection and a 42% reduction in Expected Calibration Error (ECE). Our analysis further uncovers critical limitations of existing UQ methods in BEV perception scenarios.

📝 Abstract
The fusion of raw sensor data to create a Bird's Eye View (BEV) representation is critical for autonomous vehicle planning and control. Despite the growing interest in using deep learning models for BEV semantic segmentation, anticipating segmentation errors and enhancing the explainability of these models remain underexplored. This paper introduces a comprehensive benchmark for predictive uncertainty quantification in BEV segmentation, evaluating multiple uncertainty quantification methods across three popular datasets with three representative network architectures. Our study focuses on the effectiveness of quantified uncertainty in detecting misclassified and out-of-distribution (OOD) pixels while also improving model calibration. Through empirical analysis, we uncover challenges in existing uncertainty quantification methods and demonstrate the potential of evidential deep learning techniques, which capture both aleatoric and epistemic uncertainty. To address these challenges, we propose a novel loss function, Uncertainty-Focal-Cross-Entropy (UFCE), specifically designed for highly imbalanced data, along with a simple uncertainty-scaling regularization term that improves both uncertainty quantification and model calibration for BEV segmentation.
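Evidential deep learning, which the abstract highlights for capturing both aleatoric and epistemic uncertainty, places a Dirichlet distribution over per-pixel class probabilities. A minimal sketch of the standard decomposition follows; the function name and the example evidence values are illustrative, not taken from the paper:

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Decompose per-pixel uncertainty from non-negative class evidence
    (shape [K]), following standard evidential deep learning."""
    alpha = evidence + 1.0  # Dirichlet concentration parameters
    S = alpha.sum()         # Dirichlet strength (total evidence + K)
    K = alpha.shape[0]
    p = alpha / S           # expected class probabilities
    vacuity = K / S         # epistemic proxy: high when evidence is scarce
    aleatoric = -(p * np.log(p + 1e-12)).sum()  # entropy of expected prediction
    return p, vacuity, aleatoric

# A confident pixel (strong evidence for class 0) vs. an ambiguous one.
p_conf, vac_conf, ale_conf = dirichlet_uncertainty(np.array([50.0, 1.0, 1.0]))
p_ambi, vac_ambi, ale_ambi = dirichlet_uncertainty(np.array([1.0, 1.0, 1.0]))
```

The ambiguous pixel yields both higher vacuity (little total evidence) and higher predictive entropy, which is what makes evidential scores usable for the benchmark's misclassification and OOD detection tasks.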
Problem

Research questions and friction points this paper is trying to address.

Quantify predictive uncertainty in BEV segmentation for autonomous vehicles.
Evaluate uncertainty methods to detect misclassified and OOD pixels.
Propose UFCE loss function for imbalanced data and model calibration.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces benchmark for BEV uncertainty quantification
Proposes novel UFCE loss for imbalanced data
Uses evidential deep learning for uncertainty types
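The class-imbalance problem the UFCE loss targets is conventionally handled by focal reweighting, which down-weights easy, well-classified pixels so that rare classes dominate the gradient. The paper's exact UFCE formulation is not reproduced here; the sketch below is the generic focal cross-entropy (with an optional per-class weight) as an illustration of the underlying mechanism:

```python
import numpy as np

def focal_ce(probs, targets, gamma=2.0, class_weights=None):
    """Generic focal cross-entropy (not the authors' exact UFCE).
    probs: [N, K] per-pixel class probabilities; targets: [N] integer labels.
    The (1 - p_t)^gamma factor suppresses the loss on confident pixels."""
    p_t = probs[np.arange(len(targets)), targets]  # probability of true class
    w = (1.0 - p_t) ** gamma                       # focal down-weighting
    if class_weights is not None:
        w = w * class_weights[targets]             # optional rarity weighting
    return float(np.mean(-w * np.log(p_t + 1e-12)))
```

Compared with plain cross-entropy, confident pixels contribute almost nothing, which is why focal-style losses help on BEV maps where background pixels vastly outnumber vehicles and pedestrians.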
Linlin Yu
University of Texas at Dallas
Uncertainty Estimation · Trustworthy AI · Graph Neural Network · NLP
Bowen Yang
Cypress Woods High School, Cypress, TX, USA
Tianhao Wang
Department of Computer Science, The University of Texas at Dallas, Richardson, TX, USA
Kangshuo Li
Department of Computer Science, The University of Texas at Dallas, Richardson, TX, USA
Feng Chen
Department of Computer Science, The University of Texas at Dallas, Richardson, TX, USA