Leveraging Perturbation Robustness to Enhance Out-of-Distribution Detection

πŸ“… 2025-03-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the limited out-of-distribution (OOD) detection capability of deep learning models in open-world deployment, this paper proposes PROβ€”a lightweight, model-agnostic post-hoc method requiring no architectural modification. PRO leverages the key insight that OOD samples exhibit significantly greater degradation in softmax prediction confidence under small input perturbations, and it formalizes this differential perturbation robustness as an OOD discriminative signal. Specifically, PRO constructs an adversarial score function that uses gradient descent to find the local minimum of the softmax confidence near each input, ensuring compatibility with any pre-trained classifier. Evaluated on the OpenOOD benchmark, PRO reduces the false positive rate at 95% true positive rate (FPR@95) by more than 10% on an adversarially trained CIFAR-10 model. It is the leading post-hoc method for small-scale models, further pushing the limit of softmax-based OOD detection.

πŸ“ Abstract
Out-of-distribution (OOD) detection is the task of identifying inputs that deviate from the training data distribution. This capability is essential for safely deploying deep computer vision models in open-world environments. In this work, we propose a post-hoc method, Perturbation-Rectified OOD detection (PRO), based on the insight that prediction confidence for OOD inputs is more susceptible to reduction under perturbation than that of in-distribution (IND) inputs. Based on this observation, we propose an adversarial score function that searches for the local minimum score near the original input by applying gradient descent. This procedure enhances the separability between IND and OOD samples. Importantly, the approach improves OOD detection performance without complex modifications to the underlying model architectures. We conduct extensive experiments using the OpenOOD benchmark [yang2022openood]. Our approach further pushes the limit of softmax-based OOD detection and is the leading post-hoc method for small-scale models. On a CIFAR-10 model with adversarial training, PRO effectively detects near-OOD inputs, achieving a reduction of more than 10% on FPR@95 compared to state-of-the-art methods.
Problem

Research questions and friction points this paper is trying to address.

Enhancing OOD detection via perturbation robustness
Improving separability between IND and OOD samples
Boosting softmax-based detection without architectural changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Perturbation-Rectified OOD detection method
Adversarial score function with gradient descent
Enhances separability without architectural changes
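The core idea above (descend the softmax-confidence surface near each input and use the local minimum as the OOD score) can be sketched with a toy example. This is a minimal illustration, not the paper's implementation: it assumes a linear classifier and uses a finite-difference gradient in place of backpropagation; the function names (`msp`, `pro_score`) and the hyperparameters (`eps`, `steps`, `lr`) are placeholders.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def msp(x, W):
    """Maximum softmax probability of a toy linear classifier (logits = W @ x)."""
    return softmax(W @ x).max()

def pro_score(x, W, eps=0.5, steps=10, lr=0.1, h=1e-4):
    """Perturbation-rectified score (illustrative sketch): gradient descent
    toward the local minimum of MSP near x, constrained to an eps-ball.
    OOD inputs tend to lose confidence faster, so lower scores suggest OOD."""
    x0, xp = x.copy(), x.copy()
    for _ in range(steps):
        g = np.zeros_like(xp)
        for i in range(xp.size):  # central-difference gradient of MSP w.r.t. input
            d = np.zeros_like(xp)
            d[i] = h
            g[i] = (msp(xp + d, W) - msp(xp - d, W)) / (2 * h)
        xp = xp - lr * g                        # descend the confidence surface
        xp = x0 + np.clip(xp - x0, -eps, eps)   # stay near the original input
    return msp(xp, W)
```

With a confident, class-aligned input the minimized confidence stays high, while an ambiguous input near the decision boundary scores low, widening the IND/OOD gap relative to plain maximum softmax probability.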