Killing It With Zero-Shot: Adversarially Robust Novelty Detection

📅 2024-04-14
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the severe degradation of novelty detection performance under adversarial attacks, this paper proposes the first zero-shot, fine-tuning-free robust anomaly detection paradigm. Methodologically, it pioneers the integration of pretrained robust visual features (e.g., Robust ResNet) from ImageNet with parameter-free k-nearest neighbors (k-NN), enabling plug-and-play detection on unseen classes via distance-based scoring and adaptive thresholding. The framework requires no target-domain data or model adaptation, fully satisfying real-time and generalization requirements of automated safety-critical systems. Extensive evaluations demonstrate substantial improvements over state-of-the-art methods: under strong adversarial attacks—including PGD and FGSM—detection accuracy remains consistently above 90%, with robustness gains exceeding 40%. This work establishes a verifiable, deployment-ready pathway for trustworthy AI systems to reliably identify novel and adversarially perturbed inputs.
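The plug-and-play detector described above reduces to scoring each test sample by its distance to the nearest normal training features. A minimal sketch of that distance-based scoring in NumPy, assuming feature vectors have already been extracted by a frozen, adversarially robust ImageNet backbone (the extractor itself is omitted, and the function name and `k` default are illustrative, not from the paper):

```python
import numpy as np

def knn_novelty_scores(train_feats, test_feats, k=2):
    """Score each test sample by the mean distance to its k nearest
    training (normal-class) features; larger score = more novel.
    Features are assumed to come from a frozen robust backbone."""
    # L2-normalise so Euclidean distance tracks cosine similarity
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    # Pairwise Euclidean distances, shape (n_test, n_train)
    dists = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
    # Mean distance to the k nearest normal samples
    return np.sort(dists, axis=1)[:, :k].mean(axis=1)

# Toy usage: one in-distribution sample, one far-away outlier
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.05, size=(50, 8)) + 1.0
inlier = rng.normal(0.0, 0.05, size=(1, 8)) + 1.0
outlier = -np.ones((1, 8))
scores = knn_novelty_scores(normal, np.vstack([inlier, outlier]), k=2)
```

Because the method is parameter-free apart from `k`, applying it to a new class only requires swapping in that class's normal-sample feature bank.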

📝 Abstract
Novelty Detection (ND) plays a crucial role in machine learning by identifying new or unseen data during model inference. This capability is especially important for the safe and reliable operation of automated systems. Despite advances in this field, existing techniques often fail to maintain their performance when subject to adversarial attacks. Our research addresses this gap by marrying the merits of nearest-neighbor algorithms with robust features obtained from models pretrained on ImageNet. We focus on enhancing the robustness and performance of ND algorithms. Experimental results demonstrate that our approach significantly outperforms current state-of-the-art methods across various benchmarks, particularly under adversarial conditions. By incorporating robust pretrained features into the k-NN algorithm, we establish a new standard for performance and robustness in the field of robust ND. This work opens up new avenues for research aimed at fortifying machine learning systems against adversarial vulnerabilities.
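The summary also mentions adaptive thresholding on the distance scores. One common way to calibrate such a threshold, shown here as an illustrative stand-in rather than the paper's exact rule, is to fix a target false-positive rate on held-out normal scores and flag anything above the corresponding quantile:

```python
import numpy as np

def fit_threshold(normal_scores, target_fpr=0.05):
    """Pick a threshold so roughly target_fpr of held-out normal
    samples would be flagged as novel (quantile rule; illustrative)."""
    return np.quantile(normal_scores, 1.0 - target_fpr)

def is_novel(scores, threshold):
    """Flag samples whose k-NN distance score exceeds the threshold."""
    return scores > threshold

# Toy usage: calibrate on synthetic held-out normal scores
normal_scores = np.linspace(0.0, 1.0, 100)
t = fit_threshold(normal_scores, target_fpr=0.05)
flags = is_novel(np.array([1.5, 0.1]), t)
```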
Problem

Research questions and friction points this paper is trying to address.

Machine Learning
Adversarial Attacks
Novelty Detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nearest Neighbor Algorithm
ImageNet Features
Robustness in Novelty Detection
👥 Authors
Hossein Mirzaei — PhD student @ Mackenzie Mathis Lab (Machine Learning)
Mohammad Jafari — Sharif University of Technology, Tehran, Iran
Hamid Reza Dehbashi — Sharif University of Technology, Tehran, Iran
Z. Taghavi — Sharif University of Technology, Tehran, Iran
Mohammad Sabokrou — Okinawa Institute of Science and Technology (Machine Learning, Computer Vision, Trustworthy AI)
M. H. Rohban — Sharif University of Technology, Tehran, Iran