RankOOD - Class Ranking-based Out-of-Distribution Detection

📅 2025-11-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge that deep neural networks trained with cross-entropy loss struggle to distinguish in-distribution (ID) from out-of-distribution (OOD) samples, this paper proposes a class-ranking-based OOD detection method. The core innovation is the first integration of the Plackett-Luce loss (a ranking loss from learning-to-rank) into OOD detection: an ID-pretrained model first captures the intrinsic class-preference order induced by each ID class; this learned ranking structure then serves as supervision for a second round of fine-tuning with ranking-specific optimization, sharpening the ID/OOD decision boundary. Crucially, the method requires neither additional annotations nor OOD data, and it substantially improves the model's ability to capture fine-grained class-probability structure. Evaluated on the TinyImageNet-based near-OOD benchmark, it achieves state-of-the-art performance, reducing the false positive rate at 95% true positive rate (FPR95) by 4.3%.
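The Plackett-Luce loss at the core of the method decomposes the probability of a full class ranking into a sequence of softmax choices: the top-ranked class is picked from all classes, the second from the remainder, and so on. A minimal NumPy sketch of this loss (illustrative only, not the paper's implementation; the function name and shapes are assumptions):

```python
import numpy as np

def plackett_luce_nll(logits, ranking):
    """Negative log-likelihood of a class ranking under the Plackett-Luce model.

    logits:  1-D array of class scores from the network.
    ranking: permutation of class indices, most-preferred first.
    """
    s = np.asarray(logits, dtype=float)[list(ranking)]
    nll = 0.0
    for k in range(len(s)):
        tail = s[k:]                      # scores of classes not yet ranked
        m = tail.max()                    # stabilized log-sum-exp
        log_z = m + np.log(np.exp(tail - m).sum())
        nll -= s[k] - log_z               # log P(next pick = s[k] | remaining)
    return nll
```

Minimizing this loss against the fixed per-class rank lists is what the second training round optimizes; note that with only two classes it reduces to the familiar binary logistic loss.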

📝 Abstract
We propose RankOOD, a rank-based Out-of-Distribution (OOD) detection approach based on training a model with the Plackett-Luce loss, which is now extensively used for preference-alignment tasks in foundation models. Our approach builds on the insight that in a deep learning model trained with the cross-entropy loss, each in-distribution (ID) class prediction induces a characteristic ranking pattern over the remaining classes. The RankOOD framework formalizes this insight by first extracting a rank list for each class using an initial classifier, then running a second round of training with the Plackett-Luce loss, where the class rank, a fixed permutation for each class, is the predicted variable. An OOD example may be assigned to an ID class with high probability, but the probability of it respecting that class's ranking is likely to be small. RankOOD achieves SOTA performance on the near-OOD TinyImageNet evaluation benchmark, reducing FPR95 by 4.3%.
Problem

Research questions and friction points this paper is trying to address.

Proposes RankOOD for detecting out-of-distribution examples using ranking patterns
Trains models with Plackett-Luce loss to capture class ranking distributions
Significantly reduces false positive rates on near-OOD detection benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

First use of the Plackett-Luce ranking loss for OOD detection
Extracts a fixed class-rank list for each ID class from an initial classifier
Achieves state-of-the-art near-OOD detection performance (FPR95 reduced by 4.3% on TinyImageNet)
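At test time, the abstract's argument suggests a natural detector: an OOD input may still receive a high top-1 probability, but its logits are unlikely to respect any class's fixed rank list. A hedged sketch of such a scoring rule, assuming the Plackett-Luce negative log-likelihood of the best-fitting class ranking is used as the OOD score (the paper's exact rule may differ; all names here are illustrative):

```python
import numpy as np

def pl_nll(logits, ranking):
    """Plackett-Luce negative log-likelihood of `ranking` given `logits`."""
    s = np.asarray(logits, dtype=float)[list(ranking)]
    total = 0.0
    for k in range(len(s)):
        tail = s[k:]
        m = tail.max()
        total -= s[k] - (m + np.log(np.exp(tail - m).sum()))
    return total

def rank_ood_score(logits, class_rank_lists):
    """Score an input by how poorly its logits fit the best-matching
    per-class rank list; higher score = more OOD-like (assumed convention)."""
    return min(pl_nll(logits, r) for r in class_rank_lists)
```

An input would then be flagged as OOD when its score exceeds a threshold calibrated on ID data (e.g. at the 95% true-positive operating point used for FPR95).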