Exploiting Inter-Sample Information for Long-tailed Out-of-Distribution Detection

📅 2025-11-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high false positive rate in out-of-distribution (OOD) detection and weak tail-class discrimination under long-tailed distributions, this paper proposes a graph neural network–based approach for modeling inter-sample semantic relationships. Specifically, it constructs an initial graph over pre-trained feature embeddings, applies Gaussian normalization to calibrate activation distributions, and employs graph convolutional networks (GCNs) to refine feature representations—thereby explicitly encoding semantic correlations among samples and enhancing discriminability for tail classes. The method requires no resampling or relabeling. Evaluated on three long-tailed benchmarks—CIFAR10-LT, CIFAR100-LT, and ImageNet-LT—it significantly outperforms existing OOD detection methods, achieving an average 12.6% reduction in false positive rate (FPR) and a 9.3% improvement in tail-class classification accuracy. Its core innovation lies in jointly integrating sample-relational modeling and distribution calibration into a unified long-tailed OOD detection framework, thereby balancing robustness and fine-grained discriminative capability.

Technology Category

Application Category

📝 Abstract
Detecting out-of-distribution (OOD) data is essential for safe deployment of deep neural networks (DNNs). This problem becomes particularly challenging in the presence of long-tailed in-distribution (ID) datasets, often leading to high false positive rates (FPR) and low tail-class ID classification accuracy. In this paper, we demonstrate that exploiting inter-sample relationships using a graph-based representation can significantly improve OOD detection in long-tailed recognition of vision datasets. To this end, we use the feature space of a pre-trained model to initialize our graph structure. We account for the differences between the activation layer distribution of the pre-training vs. training data, and actively introduce Gaussianization to alleviate any deviations from a standard normal distribution in the activation layers of the pre-trained model. We then refine this initial graph representation using graph convolutional networks (GCNs) to arrive at a feature space suitable for long-tailed OOD detection. This leads us to address the inferior performance observed in ID tail-classes within existing OOD detection methods. Experiments over three benchmarks CIFAR10-LT, CIFAR100-LT, and ImageNet-LT demonstrate that our method outperforms the state-of-the-art approaches by a large margin in terms of FPR and tail-class ID classification accuracy.
Problem

Research questions and friction points this paper is trying to address.

Improving out-of-distribution detection for long-tailed datasets
Reducing false positive rates in tail-class ID classification
Enhancing OOD detection using graph-based inter-sample relationships
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses graph-based representation for inter-sample relationships
Applies Gaussianization to normalize activation layer distribution
Refines graph with GCNs for long-tailed OOD detection
🔎 Similar Papers
No similar papers found.
N
Nimeshika Udayangani
The University of Melbourne
H
Hadi M. Dolatabadi
The University of Melbourne
S
Sarah Erfani
The University of Melbourne
Christopher Leckie
Christopher Leckie
Professor, Computing and Information Systems, The University of Melbourne
artificial intelligencemachine learninganomaly detectionclusteringcyber security