Out-of-Distribution Detection & Applications With Ablated Learned Temperature Energy

📅 2024-01-22
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the critical issue of deep models assigning spuriously high-confidence predictions to out-of-distribution (OOD) samples in high-stakes scenarios, undermining decision reliability, this paper proposes AbeT (Ablated Learned Temperature Energy), an OOD detection method trained only on in-distribution (ID) data that requires no multi-stage training, auxiliary hyperparameters, or test-time backpropagation. AbeT implicitly characterizes the OOD boundary through exposure to misclassified ID samples at training time, and combines a learned-temperature energy score with feature ablation to sharpen ID/OOD discrimination. Evaluated across image classification, object detection, and semantic segmentation, AbeT lowers the false positive rate at 95% true positive rate (FPR@95) by 43.43% in classification, raises the area under the ROC curve (AUROC) by 5.15% in object detection, and in semantic segmentation lowers FPR@95 by 41.48% while raising the area under the precision-recall curve (AUPRC) by 34.20%, all relative to prior state of the art. These gains strengthen uncertainty calibration and deployment safety.
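The two ingredients named in the summary, an energy score with a temperature and ablation (pruning) of activations, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the default temperature, the `keep_fraction` value, and the top-k pruning rule are assumptions made for the example (in AbeT the temperature is learned during training, and the ablation is applied to network features).

```python
import numpy as np

def learned_temperature_energy(logits, temperature=1.5):
    """Negative free energy T * logsumexp(logits / T).

    Higher scores indicate more in-distribution-like inputs. Computed
    with the max-subtraction trick for numerical stability.
    """
    z = np.asarray(logits, dtype=float) / temperature
    m = z.max(axis=-1, keepdims=True)
    return temperature * (m[..., 0] + np.log(np.exp(z - m).sum(axis=-1)))

def ablate(features, keep_fraction=0.65):
    """Zero out all but the top-k activations along the last axis.

    A simple stand-in for the feature ablation step; ties at the
    threshold may keep slightly more than k entries.
    """
    feats = np.asarray(features, dtype=float)
    k = max(1, int(round(feats.shape[-1] * keep_fraction)))
    thresh = np.partition(feats, -k, axis=-1)[..., -k][..., None]
    return np.where(feats >= thresh, feats, 0.0)
```

A confidently classified input (one sharply peaked logit) receives a higher energy score than a uniformly uncertain one, which is the separation the detector thresholds on.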

📝 Abstract
As deep neural networks become adopted in high-stakes domains, it is crucial to identify when inference inputs are Out-of-Distribution (OOD) so that users can be alerted of likely drops in performance and calibration despite high confidence -- ultimately to know when networks' decisions (and their uncertainty in those decisions) should be trusted. In this paper we introduce Ablated Learned Temperature Energy (or "AbeT" for short), an OOD detection method which lowers the False Positive Rate at 95% True Positive Rate (FPR@95) by 43.43% in classification compared to state of the art without training networks in multiple stages or requiring hyperparameters or test-time backward passes. We additionally provide empirical insights as to why our model learns to distinguish between In-Distribution (ID) and OOD samples while only being explicitly trained on ID samples via exposure to misclassified ID examples at training time. Lastly, we show the efficacy of our method in identifying predicted bounding boxes and pixels corresponding to OOD objects in object detection and semantic segmentation, respectively -- with an AUROC increase of 5.15% in object detection and both a decrease in FPR@95 of 41.48% and an increase in AUPRC of 34.20% in semantic segmentation compared to previous state of the art.
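The headline metric in the abstract, FPR@95, measures how many OOD samples are still accepted when the detection threshold is set to keep 95% of ID samples. A minimal sketch of that computation (assuming the convention that higher scores mean more in-distribution):

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """False positive rate on OOD samples at 95% ID true positive rate.

    The threshold is the 5th percentile of ID scores, so 95% of ID
    samples score at or above it; the FPR is the fraction of OOD
    samples that also clear that threshold.
    """
    tau = np.percentile(np.asarray(id_scores, dtype=float), 5.0)
    return float((np.asarray(ood_scores, dtype=float) >= tau).mean())
```

A 43.43% reduction in this quantity means far fewer OOD inputs slip past a detector that is tuned to rarely reject legitimate in-distribution data.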
Problem

Research questions and friction points this paper is trying to address.

Detect Out-of-Distribution inputs to ensure model reliability
Improve OOD detection accuracy without multi-stage training
Extend OOD detection to object detection and segmentation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

AbeT cuts FPR@95 by 43.43% in classification versus prior state of the art
Single-stage training with no extra hyperparameters or test-time backward passes
Detects OOD bounding boxes and pixels in detection and segmentation
Will LeVine
Microsoft
AI · Machine Learning · Deep Learning
Benjamin Pikus
Advex AI
Machine Learning · Computer Vision · Few-Shot Learning · Large Language Models · Calibration
Jacob Phillips
Andreessen Horowitz
Berk Norman
Anduril
Fernando Amat Gil
Google
Sean Hendryx
Scale AI