🤖 AI Summary
Autonomous driving semantic segmentation models frequently misclassify unknown objects as known classes and struggle to distinguish rare known classes from genuine out-of-distribution (OOD) instances. To address this, we propose an uncertainty-aware pixel-wise OOD detection method that uniquely integrates evidential deep learning with likelihood ratio testing. Our approach decouples semantic feature extraction from uncertainty modeling, enabling explicit discrimination between rare known samples and synthetic or real anomalies. Specifically, we leverage the Dirichlet distribution output by an evidential classifier to estimate the likelihood ratio, thereby enhancing outlier exposure efficiency. Evaluated on five benchmark datasets, our method achieves a mean false positive rate of only 2.5% and a mean precision of 90.91%, with negligible computational overhead. It significantly outperforms state-of-the-art approaches in both accuracy and robustness.
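The evidential-classifier idea above can be sketched as follows. This is not the authors' code; it is a minimal illustration of a standard evidential deep learning parameterization (softplus evidence, alpha = evidence + 1), showing how per-pixel Dirichlet parameters yield both expected class probabilities and an uncertainty signal. The function names are hypothetical.

```python
import numpy as np

def dirichlet_from_logits(logits):
    """Map raw per-pixel logits (..., K) to Dirichlet parameters alpha.

    Evidence is made non-negative with softplus; alpha = evidence + 1,
    a common parameterization in evidential deep learning (an assumed
    detail here, not necessarily the paper's exact choice).
    """
    evidence = np.log1p(np.exp(logits))  # softplus keeps evidence >= 0
    return evidence + 1.0

def dirichlet_stats(alpha):
    """Expected class probabilities and a vacuity-style uncertainty."""
    S = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength per pixel
    probs = alpha / S                      # expected class probabilities
    K = alpha.shape[-1]
    uncertainty = K / S[..., 0]            # high when total evidence is low
    return probs, uncertainty
```

Pixels with little accumulated evidence get a flat Dirichlet and high uncertainty, which is exactly the signal used to separate rare known samples from anomalies.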
📝 Abstract
Semantic segmentation models trained on known object classes often fail in real-world autonomous driving scenarios by confidently misclassifying unknown objects. While pixel-wise out-of-distribution detection can identify unknown objects, existing methods struggle in complex scenes where rare object classes are often confused with truly unknown objects. We introduce an uncertainty-aware likelihood ratio estimation method that addresses these limitations. Our approach uses an evidential classifier within a likelihood ratio test to distinguish between known and unknown pixel features from a semantic segmentation model, while explicitly accounting for uncertainty. Instead of producing point estimates, our method outputs probability distributions that capture uncertainty from both rare training examples and imperfect synthetic outliers. We show that by incorporating uncertainty in this way, outlier exposure can be leveraged more effectively. Evaluated on five standard benchmark datasets, our method achieves the lowest average false positive rate (2.5%) among state-of-the-art methods while maintaining high average precision (90.91%) and incurring only negligible computational overhead. Code is available at https://github.com/glasbruch/ULRE.
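The decision rule described above can be illustrated with a short sketch. Assume a binary evidential head over each pixel's features outputs Dirichlet parameters for the two hypotheses (inlier vs. outlier); the likelihood ratio is then estimated from the expected probabilities. The function names and the log-ratio form are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def ood_score(alpha_in_out):
    """Per-pixel log-likelihood-ratio style score.

    alpha_in_out: Dirichlet parameters (..., 2) for (inlier, outlier),
    as produced by a hypothetical binary evidential classifier.
    Higher scores indicate a pixel is more likely out-of-distribution.
    """
    S = alpha_in_out.sum(axis=-1)
    p_in = alpha_in_out[..., 0] / S   # expected inlier probability
    p_out = alpha_in_out[..., 1] / S  # expected outlier probability
    return np.log(p_out) - np.log(p_in)

def detect_ood(alpha_in_out, threshold=0.0):
    """Flag pixels whose estimated likelihood ratio exceeds the threshold."""
    return ood_score(alpha_in_out) > threshold
```

Because the score is derived from a full Dirichlet rather than a point estimate, pixels with weak evidence on both hypotheses (e.g. rare known classes) stay near the decision boundary instead of being confidently flagged.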