🤖 AI Summary
To address the challenge that deep neural networks trained with cross-entropy loss struggle to distinguish in-distribution (ID) from out-of-distribution (OOD) samples, this paper proposes a category-ranking-based OOD detection method. The core innovation is the first integration of the Plackett–Luce loss—a ranking loss from learning-to-rank—into OOD detection: an ID-pretrained model first captures intrinsic category preference orders among ID samples; subsequently, this learned ranking structure serves as supervision for fine-tuning via ranking-specific optimization, thereby sharpening the ID/OOD decision boundary. Crucially, the method requires neither additional annotations nor OOD data, significantly enhancing the model’s capacity to model fine-grained probability distributions. Evaluated on the TinyImageNet-based near-OOD benchmark, it achieves state-of-the-art performance, reducing false positive rate at 95% true positive rate (FPR95) by 4.3%.
📝 Abstract
We propose RankOOD, a rank-based Out-of-Distribution (OOD) detection approach that trains a model with the Plackett–Luce loss, which is now extensively used for preference-alignment tasks in foundation models. Our approach is based on the insight that a deep learning model trained with the cross-entropy loss induces a ranking pattern over the remaining classes for each in-distribution (ID) class prediction. The RankOOD framework formalizes this insight by first extracting a rank list for each class from an initial classifier and then running a second round of training with the Plackett–Luce loss, where the class rank, a fixed permutation for each class, is the predicted variable. An OOD example may be assigned to an ID class with high probability, but the probability that it respects that class's ranking is likely to be small. RankOOD achieves SOTA performance on the near-OOD TinyImageNet evaluation benchmark, reducing FPR95 by 4.3%.
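The abstract does not spell out the loss itself, but the standard Plackett–Luce negative log-likelihood of a fixed ranking given model logits can be sketched as follows (a minimal NumPy illustration, not the paper's implementation; function and variable names are our own):

```python
import numpy as np

def plackett_luce_nll(scores, ranking):
    """Negative log-likelihood of a full ranking under the Plackett-Luce model.

    scores:  1-D array of real-valued logits, one per class.
    ranking: a permutation of class indices, most-preferred first
             (e.g. the fixed per-class rank list RankOOD extracts).
    """
    s = np.asarray(scores, dtype=float)[np.asarray(ranking)]
    nll = 0.0
    # Sequentially pick each ranked item against the remaining ones:
    # loss = -sum_k [ s_k - logsumexp(s_k, ..., s_n) ]
    for k in range(len(s)):
        tail = s[k:]
        m = tail.max()  # numerically stable log-sum-exp
        nll -= s[k] - (m + np.log(np.exp(tail - m).sum()))
    return nll
```

A ranking that agrees with the score ordering receives a lower loss than a disagreeing one, which is how a low Plackett–Luce likelihood can flag an OOD input even when one ID logit is large.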