🤖 AI Summary
Existing out-of-distribution (OOD) detection methods struggle to distinguish near-OOD samples and rely heavily on extensive hyperparameter tuning, limiting their practical applicability. To address this, we propose a gradient-aware, prototype-driven OOD detection framework. First, class prototypes are computed directly from in-distribution (ID) training data, and a synthetic OOD prototype is constructed from artificially generated samples. Second, we introduce a nearest-class-prototype loss and analyze its gradients with respect to the synthetic OOD prototype, enabling a clear separation of ID and OOD samples in the learned feature space. The method requires no distributional assumptions and no complex hyperparameter search, offering both interpretability and engineering simplicity. Evaluated on benchmarks including ImageNet-1k, it surpasses state-of-the-art approaches, particularly in near-OOD detection accuracy and robustness.
📝 Abstract
Out-of-distribution (OOD) detection is crucial for ensuring the reliability of deep learning models in real-world applications. Existing methods typically focus on feature representations or output-space analysis, often assuming a distribution over these spaces or leveraging gradient norms with respect to model parameters. However, these approaches struggle to distinguish near-OOD samples and often require extensive hyper-parameter tuning, limiting their practicality. In this work, we propose GRadient-aware Out-Of-Distribution detection (GROOD), a method that derives an OOD prototype from synthetic samples and computes class prototypes directly from in-distribution (ID) training data. By analyzing the gradients of a nearest-class-prototype loss function with respect to the artificial OOD prototype, our approach achieves a clear separation between in-distribution and OOD samples. Experimental evaluations demonstrate that gradients computed from the OOD prototype enhance the distinction between ID and OOD data, surpassing established baselines in robustness, particularly on ImageNet-1k. These findings highlight the potential of gradient-based methods and prototype-driven approaches in advancing OOD detection within deep neural networks.
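The pipeline described above can be sketched in a toy form. This is not the paper's implementation: the choice of a softmax over negative squared distances as the prototype loss, the use of the nearest ID class as the target, and the names `class_prototypes` and `grood_score` are all illustrative assumptions; only the overall idea (score a sample by the gradient of a prototype loss with respect to the OOD prototype) comes from the abstract.

```python
import numpy as np

def class_prototypes(features, labels, n_classes):
    # ID class prototypes: the mean feature vector of each class.
    return np.stack([features[labels == k].mean(axis=0) for k in range(n_classes)])

def grood_score(z, protos, ood_proto):
    """Toy gradient-based OOD score (illustrative, not the paper's exact loss).

    Loss: cross-entropy over a softmax of negative squared distances to all
    prototypes (ID classes plus the OOD prototype), with the nearest ID class
    as the target. The gradient of that loss w.r.t. the OOD prototype works
    out analytically to 2 * p_ood * (z - ood_proto), where p_ood is the
    softmax mass on the OOD prototype; its norm serves as the OOD score.
    """
    all_protos = np.vstack([protos, ood_proto[None, :]])
    logits = -((z - all_protos) ** 2).sum(axis=1)   # negative squared distances
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p_ood = p[-1]                                   # softmax mass on OOD prototype
    grad = 2.0 * p_ood * (z - ood_proto)            # dLoss/d(ood_proto)
    return np.linalg.norm(grad)                     # large norm => likely OOD
```

Under this sketch, an ID sample sitting near a class prototype puts almost no softmax mass on the OOD prototype, so the gradient (and the score) is tiny; a near-OOD sample close to the synthetic prototype yields a large gradient norm, which is the separation the method exploits.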