🤖 AI Summary
This work addresses the challenges of multi-class visual anomaly detection—namely, high category diversity, scarcity of anomalous samples, and deceptive defect appearances that hinder precise localization—by proposing an unsupervised anomaly localization framework based on cross-modal prompting. The method introduces class-specific textual prompts for both normal and anomalous conditions for the first time, leveraging CLIP for semantic guidance. It integrates multi-scale convolutional features, Transformer-based spatial attention, and a diffusion-inspired iterative refinement mechanism, while employing Focal Loss to enhance learning in hard-to-detect regions. Evaluated on MVTec-AD, the approach achieves state-of-the-art pixel-level performance with an average AUC of 98.35% and an AP of 66.54%, demonstrating significant improvements in localization accuracy and robustness across diverse object categories.
📝 Abstract
Visual anomaly detection in multi-class settings poses significant challenges due to the diversity of object categories, the scarcity of anomalous examples, and the presence of camouflaged defects. In this paper, we propose PromptMAD, a cross-modal prompting framework for unsupervised visual anomaly detection and localization that integrates semantic guidance through vision-language alignment. By leveraging CLIP-encoded text prompts describing both normal and anomalous class-specific characteristics, our method enriches visual reconstruction with semantic context, improving the detection of subtle and textural anomalies. To further address the challenge of class imbalance at the pixel level, we incorporate the Focal loss, which emphasizes hard-to-detect anomalous regions during training. Our architecture also includes a supervised segmentor that fuses multi-scale convolutional features with Transformer-based spatial attention and diffusion-based iterative refinement, yielding precise and high-resolution anomaly maps. Extensive experiments on the MVTec-AD dataset demonstrate that our method achieves state-of-the-art pixel-level performance, improving mean AUC to 98.35% and AP to 66.54%, while maintaining efficiency across diverse categories.
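The pixel-level class-imbalance mechanism mentioned above can be illustrated with a minimal NumPy sketch of the standard Focal loss (Lin et al.) applied to a binary anomaly mask. This is only an illustrative reimplementation of the general formula, not the paper's code; the `alpha` and `gamma` values shown are the commonly used defaults, and the paper's actual settings are not stated here.

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0, eps=1e-7):
    """Pixel-wise binary Focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    probs:   predicted per-pixel anomaly probabilities in [0, 1]
    targets: ground-truth binary mask (1 = anomalous pixel, 0 = normal)
    """
    probs = np.clip(probs, eps, 1.0 - eps)          # avoid log(0)
    p_t = np.where(targets == 1, probs, 1.0 - probs)      # prob. of the true class
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)  # class-balancing weight
    # (1 - p_t)^gamma down-weights easy, well-classified pixels so the
    # gradient focuses on hard-to-detect anomalous regions
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# A confidently correct pixel contributes far less loss than a hard one:
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.1]), np.array([1]))
```

The `(1 - p_t)^gamma` modulating factor is what makes the loss emphasize hard anomalous pixels: as a pixel becomes well classified (`p_t` near 1), its contribution decays rapidly, which counteracts the overwhelming majority of easy normal pixels in a segmentation mask.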