🤖 AI Summary
Few-shot anomaly detection (FSAD) in industrial inspection faces the challenge of modeling normal data distributions from extremely limited samples. This paper proposes VisionAD, a training-free, pure nearest-neighbor framework that eliminates reliance on complex prompt engineering. It introduces three key innovations: (i) a support-query dual augmentation strategy, (ii) a multi-layer feature fusion mechanism, and (iii) a class-aware visual memory bank—enabling generalizable multi-class anomaly detection even under single-shot settings. Built upon scalable vision foundation models, VisionAD constructs an efficient class-aware memory index via dual-path data augmentation and cross-level feature integration. Evaluated on MVTec-AD, VisA, and Real-IAD, it achieves 97.4%, 94.8%, and 70.8% image-level AUROC using only one normal training image per class—surpassing all existing state-of-the-art methods.
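The summary above boils down to nearest-neighbor matching of patch features against a per-class memory of normal features. Below is a minimal sketch of that idea in PyTorch; the class and method names (`ClassAwareMemoryBank`, `add`, `score`) are illustrative placeholders rather than names from the official repository, and the sketch assumes a feature extractor that returns one embedding per image patch.

```python
# Minimal sketch of class-aware nearest-neighbor anomaly scoring.
# All names here are illustrative, not taken from the VisionAD codebase.
import torch
import torch.nn.functional as F

class ClassAwareMemoryBank:
    """Stores L2-normalized patch features per class for nearest-neighbor lookup."""

    def __init__(self):
        self.bank = {}  # class_name -> (N_patches, D) tensor of normal features

    def add(self, class_name: str, patch_feats: torch.Tensor) -> None:
        """Append patch features from a normal (support) image to the class memory."""
        feats = F.normalize(patch_feats, dim=-1)
        if class_name in self.bank:
            self.bank[class_name] = torch.cat([self.bank[class_name], feats], dim=0)
        else:
            self.bank[class_name] = feats

    def score(self, class_name: str, query_feats: torch.Tensor) -> torch.Tensor:
        """Per-patch anomaly score = 1 - max cosine similarity to the class memory."""
        memory = self.bank[class_name]                # (N, D)
        query = F.normalize(query_feats, dim=-1)      # (M, D)
        sim = query @ memory.T                        # (M, N)
        return 1.0 - sim.max(dim=1).values            # (M,)
```

Under this reading, an image-level score would be the maximum (or another robust aggregate) of the per-patch scores, and keeping a separate entry per class is what lets a single memory bank serve all classes at once.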
📝 Abstract
Few-shot anomaly detection (FSAD) has emerged as a crucial yet challenging task in industrial inspection, where normal distribution modeling must be accomplished with only a few normal images. While existing approaches typically employ multi-modal foundation models that combine language and vision modalities for prompt-guided anomaly detection, these methods often demand sophisticated prompt engineering and extensive manual tuning. In this paper, we demonstrate that a straightforward nearest-neighbor search framework can surpass state-of-the-art performance in both single-class and multi-class FSAD scenarios. Our proposed method, VisionAD, consists of four simple yet essential components: (1) scalable vision foundation models that extract universal and discriminative features; (2) dual augmentation strategies: support augmentation to enhance feature-matching adaptability, and query augmentation to compensate for the blind spots of single-view prediction; (3) multi-layer feature integration that captures both low-frequency global context and high-frequency local details with minimal computational overhead; and (4) a class-aware visual memory bank enabling efficient one-for-all multi-class detection. Extensive evaluations across the MVTec-AD, VisA, and Real-IAD benchmarks demonstrate VisionAD's exceptional performance. Using only one normal image per class as support, our method achieves remarkable image-level AUROC scores of 97.4%, 94.8%, and 70.8%, respectively, outperforming current state-of-the-art approaches by significant margins (+1.6%, +3.2%, and +1.4%). The training-free nature and superior few-shot capabilities of VisionAD make it particularly appealing for real-world applications where samples are scarce or expensive to obtain. Code is available at https://github.com/Qiqigeww/VisionAD.
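To make components (2) and (3) of the abstract concrete, here is a hedged sketch of how support augmentation, query augmentation, and multi-layer fusion could fit around the memory bank from the earlier snippet. It reuses the hypothetical `ClassAwareMemoryBank`; the backbone call `extractor(view)` and the flip-based augmentation set are assumptions for illustration, not the paper's exact choices.

```python
# A minimal sketch, assuming the ClassAwareMemoryBank defined above and a backbone
# `extractor` that returns a list of per-patch feature maps, one per selected layer.
# The augmentation set (flips) is a placeholder; the paper's augmentations may differ.
import torch

def augment_views(image: torch.Tensor) -> list[torch.Tensor]:
    """Illustrative augmentation set: identity plus horizontal and vertical flips."""
    return [image, torch.flip(image, dims=[-1]), torch.flip(image, dims=[-2])]

def fuse_layers(per_layer_feats: list[torch.Tensor]) -> torch.Tensor:
    """Multi-layer feature integration: concatenate per-patch features across layers."""
    return torch.cat(per_layer_feats, dim=-1)

def build_memory(memory, extractor, class_name: str, support_image: torch.Tensor) -> None:
    """Support augmentation: every augmented view of the normal image fills the memory."""
    for view in augment_views(support_image):
        memory.add(class_name, fuse_layers(extractor(view)))

def image_anomaly_score(memory, extractor, class_name: str, query_image: torch.Tensor) -> float:
    """Query augmentation: score each view, take its max patch score, average over views."""
    view_scores = []
    for view in augment_views(query_image):
        patch_scores = memory.score(class_name, fuse_layers(extractor(view)))
        view_scores.append(patch_scores.max().item())
    return sum(view_scores) / len(view_scores)
```

Because everything here is feature extraction and nearest-neighbor lookup, no gradient updates are involved, which is consistent with the training-free claim in the abstract.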