🤖 AI Summary
LogitNorm generalizes poorly in post-hoc out-of-distribution (OOD) detection and does not adapt well across diverse detectors.
Method: We identify LogitNorm's intrinsic limitations and propose hyperparameter-free Extended Logit Normalization (ELogitNorm), which adds a feature-distance-aware mechanism to logit normalization that dynamically models the separability between in-distribution (ID) and OOD samples in feature space. This jointly improves OOD detection robustness and ID confidence calibration.
Contribution/Results: ELogitNorm requires no additional training or hyperparameter tuning and is natively compatible with various post-hoc detectors (e.g., ODIN, Energy, Mahalanobis). On standard benchmarks, including CIFAR, SVHN, and ImageNet, it consistently outperforms existing training-time OOD methods, yielding significant improvements in FPR95 and AUROC while preserving ID classification accuracy.
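For context, the predecessor LogitNorm trains with cross-entropy on L2-normalized logits scaled by a temperature $\tau$, decoupling the logit magnitude from the loss; ELogitNorm's feature-distance-aware extension is specified in the paper itself and is not reproduced here. Below is a minimal NumPy sketch of the baseline normalization only (the function name and the default $\tau$ are illustrative, not from the paper):

```python
import numpy as np

def logitnorm(logits, tau=0.04, eps=1e-7):
    """LogitNorm-style normalization: divide each logit vector by tau times
    its L2 norm, so every normalized logit vector has norm 1/tau.
    Cross-entropy is then computed on the normalized logits during training."""
    norms = np.linalg.norm(logits, axis=-1, keepdims=True)
    return logits / (tau * norms + eps)

# Example: a single 3-class logit vector
logits = np.array([[3.0, 4.0, 0.0]])
z = logitnorm(logits, tau=1.0)
# With tau=1.0, the normalized vector lies on the unit sphere
```

Because the magnitude of the logits no longer affects the loss, the network is discouraged from inflating logit norms to reduce cross-entropy, which is the overconfidence mechanism LogitNorm targets.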
📝 Abstract
Out-of-distribution (OOD) detection is essential for the safe deployment of machine learning models. Recent advances have explored improved classification losses and representation learning strategies to enhance OOD detection. However, these methods are often tailored to specific post-hoc detection techniques, limiting their generalizability. In this work, we identify a critical issue in Logit Normalization (LogitNorm), which inhibits its effectiveness in improving certain post-hoc OOD detection methods. To address this, we propose Extended Logit Normalization ($\textbf{ELogitNorm}$), a novel hyperparameter-free formulation that significantly benefits a wide range of post-hoc detection methods. By incorporating feature distance-awareness into LogitNorm, $\textbf{ELogitNorm}$ shows more robust OOD separability and in-distribution (ID) confidence calibration than its predecessor. Extensive experiments across standard benchmarks demonstrate that our approach outperforms state-of-the-art training-time methods in OOD detection while maintaining strong ID classification accuracy.