GROOD: Gradient-Aware Out-of-Distribution Detection

📅 2023-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing out-of-distribution (OOD) detection methods struggle to distinguish near-OOD samples and rely heavily on extensive hyperparameter tuning, limiting their practical applicability. To address this, the paper proposes a gradient-aware, prototype-driven OOD detection framework. First, class prototypes are computed from in-distribution (ID) training data, and a synthetic OOD prototype is derived from generated samples. Second, a nearest-class-prototype loss is introduced, and input gradients are analyzed with respect to the synthetic OOD prototype, enabling a clear separation of ID and OOD samples in the learned feature space. The method requires no distributional assumptions or complex hyperparameter search, offering both interpretability and engineering simplicity. Evaluated on benchmarks including ImageNet-1k, it surpasses established baselines, particularly on near-OOD detection and robustness.
📝 Abstract
Out-of-distribution (OOD) detection is crucial for ensuring the reliability of deep learning models in real-world applications. Existing methods typically focus on feature representations or output-space analysis, often assuming a distribution over these spaces or leveraging gradient norms with respect to model parameters. However, these approaches struggle to distinguish near-OOD samples and often require extensive hyperparameter tuning, limiting their practicality. In this work, we propose GRadient-aware Out-Of-Distribution detection (GROOD), a method that derives an OOD prototype from synthetic samples and computes class prototypes directly from in-distribution (ID) training data. By analyzing the gradients of a nearest-class-prototype loss function with respect to an artificial OOD prototype, our approach achieves a clear separation between in-distribution and OOD samples. Experimental evaluations demonstrate that gradients computed from the OOD prototype enhance the distinction between ID and OOD data, surpassing established baselines in robustness, particularly on ImageNet-1k. These findings highlight the potential of gradient-based methods and prototype-driven approaches in advancing OOD detection within deep neural networks.
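The abstract's first step, computing class prototypes directly from ID training data, is commonly done as per-class means of feature embeddings. The paper does not give its exact formula here, so the following is a minimal sketch under that assumption, with penultimate-layer features as the representation:

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Per-class mean feature vectors (one plausible prototype construction).

    features: (N, D) array of embeddings, e.g. penultimate-layer activations
              (an assumption; the summary does not specify the layer).
    labels:   (N,) integer class labels in [0, num_classes).
    Returns a (num_classes, D) array of class prototypes.
    """
    d = features.shape[1]
    protos = np.zeros((num_classes, d))
    for c in range(num_classes):
        # Mean of all ID training embeddings belonging to class c.
        protos[c] = features[labels == c].mean(axis=0)
    return protos

# Toy example: two classes in a 2-D feature space.
feats = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(feats, labels, num_classes=2)
# protos is the per-class mean: [[0.1, 0.0], [1.1, 1.0]]
```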
Problem

Research questions and friction points this paper is trying to address.

Detecting near-OOD samples effectively in deep learning models
Reducing hyper-parameter tuning for practical OOD detection methods
Enhancing ID and OOD separation using gradient-aware prototype analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses gradient-aware OOD prototype for detection
Leverages nearest-class-prototype loss function
Enhances ID-OOD separation with synthetic samples
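The contributions above combine a nearest-class-prototype loss with gradients taken with respect to a synthetic OOD prototype. The page gives no formulas, so the sketch below is one hypothetical reading: a softmax over negative squared distances to all prototypes, a cross-entropy loss toward the nearest ID class, and an OOD score equal to the norm of that loss's analytic gradient with respect to the OOD prototype. All names and the exact scoring rule are illustrative assumptions, not the paper's verified method:

```python
import numpy as np

def grad_score(z, id_protos, ood_proto):
    """Gradient-norm OOD score (hypothetical reading of GROOD's idea).

    Logits are negative squared distances to each prototype (ID classes
    plus one synthetic OOD prototype); the loss is cross-entropy toward
    the nearest ID class. We score the input by the norm of the analytic
    gradient of that loss w.r.t. the OOD prototype.
    """
    protos = np.vstack([id_protos, ood_proto])   # (C + 1, D)
    d = ((z - protos) ** 2).sum(axis=1)          # squared distances
    logits = -d
    q = np.exp(logits - logits.max())            # stable softmax
    q /= q.sum()
    # For cross-entropy toward the nearest ID class y (y != ood index),
    # dL/d p_ood = q_ood * 2 * (z - p_ood).
    grad = q[-1] * 2.0 * (z - ood_proto)
    return float(np.linalg.norm(grad))

id_protos = np.array([[0.0, 0.0], [4.0, 0.0]])
ood_proto = np.array([2.0, 3.0])
s_id = grad_score(np.array([0.1, 0.0]), id_protos, ood_proto)   # near class 0
s_ood = grad_score(np.array([2.0, 2.5]), id_protos, ood_proto)  # near OOD proto
# The OOD-like input yields the larger gradient norm, so s_ood > s_id.
```

Intuitively, inputs far from every ID prototype place most softmax mass on the OOD prototype, inflating the gradient norm; ID inputs leave it near zero. Only a distance threshold on the score is needed, consistent with the paper's claim of minimal hyperparameter tuning.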