AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection

πŸ“… 2023-10-29
πŸ›οΈ International Conference on Learning Representations
πŸ“ˆ Citations: 114
✨ Influential: 20
πŸ€– AI Summary
Zero-shot anomaly detection (ZSAD) aims to identify anomalies across domains without access to target-domain training samples; however, its generalizability is severely hindered by substantial discrepancies in foreground objects, anomaly appearances, and background distributions. To address this, we propose a CLIP-based universal ZSAD framework. Our method introduces the first object-agnostic text prompt learning mechanism, which decouples normality/abnormality modeling from foreground object semantics. We further design learnable, domain-agnostic β€œnormal” and β€œabnormal” text prompts and leverage vision-language feature alignment to enable zero-shot anomaly scoring and pixel-level localization. Critically, our approach requires no target-domain annotations or fine-tuning. Extensive experiments across 17 industrial defect and medical imaging datasets demonstrate significant improvements over existing ZSAD methods, achieving both strong cross-domain generalization and high localization accuracy.
πŸ“ Abstract
Zero-shot anomaly detection (ZSAD) requires detection models trained using auxiliary data to detect anomalies without any training sample in a target dataset. It is a crucial task when training data is not accessible due to various concerns, e.g., data privacy, yet it is challenging since the models need to generalize to anomalies across different domains where the appearance of foreground objects, abnormal regions, and background features, such as defects/tumors on different products/organs, can vary significantly. Recently, large pre-trained vision-language models (VLMs), such as CLIP, have demonstrated strong zero-shot recognition ability in various vision tasks, including anomaly detection. However, their ZSAD performance is weak since the VLMs focus more on modeling the class semantics of the foreground objects than the abnormality/normality in the images. In this paper, we introduce a novel approach, namely AnomalyCLIP, to adapt CLIP for accurate ZSAD across different domains. The key insight of AnomalyCLIP is to learn object-agnostic text prompts that capture generic normality and abnormality in an image regardless of its foreground objects. This allows our model to focus on the abnormal image regions rather than the object semantics, enabling generalized normality and abnormality recognition on diverse types of objects. Large-scale experiments on 17 real-world anomaly detection datasets show that AnomalyCLIP achieves superior zero-shot performance in detecting and segmenting anomalies in datasets of highly diverse class semantics from various defect inspection and medical imaging domains. Code will be made available at https://github.com/zqhang/AnomalyCLIP.
Problem

Research questions and friction points this paper is trying to address.

Adapting CLIP for zero-shot anomaly detection across domains
Learning object-agnostic prompts for generic abnormality recognition
Improving detection of anomalies without target dataset training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object-agnostic prompt learning for anomaly detection
Adapts CLIP to focus on abnormal regions
Generic normality and abnormality text prompts
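The scoring idea behind these contributions can be sketched as CLIP-style zero-shot classification against two generic text embeddings. The sketch below is illustrative only, not the paper's implementation: AnomalyCLIP *learns* its object-agnostic prompt embeddings, whereas here `normal_emb` and `abnormal_emb` are fixed placeholder vectors, and the `temperature` value is an assumption.

```python
import numpy as np

def anomaly_score(image_feat, normal_emb, abnormal_emb, temperature=0.07):
    """Score an image (or patch) feature against object-agnostic
    'normal'/'abnormal' text embeddings via cosine similarity followed
    by a softmax, in the style of CLIP zero-shot classification.
    Returns the probability mass on the 'abnormal' prompt."""
    text = np.stack([normal_emb, abnormal_emb])
    text = text / np.linalg.norm(text, axis=1, keepdims=True)  # unit-norm text embeddings
    img = image_feat / np.linalg.norm(image_feat)              # unit-norm image feature
    logits = text @ img / temperature                          # cosine similarities, scaled
    probs = np.exp(logits - logits.max())                      # numerically stable softmax
    probs = probs / probs.sum()
    return probs[1]

# Toy demo with random stand-in embeddings (hypothetical, 512-d like CLIP ViT-B).
rng = np.random.default_rng(0)
normal_emb = rng.normal(size=512)
abnormal_emb = rng.normal(size=512)
image_feat = normal_emb + 0.1 * rng.normal(size=512)  # an image close to 'normal'
score = anomaly_score(image_feat, normal_emb, abnormal_emb)
```

Applying the same scoring per patch feature, rather than per image, yields the pixel-level anomaly map used for localization.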