PromptMAD: Cross-Modal Prompting for Multi-Class Visual Anomaly Localization

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of multi-class visual anomaly detection—namely, high category diversity, scarcity of anomalous samples, and deceptive defect appearances that hinder precise localization—by proposing an unsupervised anomaly localization framework based on cross-modal prompting. The method is the first to introduce class-specific textual prompts for both normal and anomalous conditions, leveraging CLIP for semantic guidance. It integrates multi-scale convolutional features, Transformer-based spatial attention, and a diffusion-inspired iterative refinement mechanism, while employing Focal loss to enhance learning in hard-to-detect regions. Evaluated on MVTec-AD, the approach achieves state-of-the-art pixel-level performance with an average AUC of 98.35% and an AP of 66.54%, demonstrating significant improvements in localization accuracy and robustness across diverse object categories.
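The cross-modal prompting idea described above—comparing visual patch features against text embeddings of class-specific "normal" and "anomalous" prompts—can be sketched as a cosine-similarity softmax over the two prompt embeddings. This is a minimal illustration, not the paper's implementation: the function name, the use of precomputed embedding arrays in place of actual CLIP encoder outputs, and the temperature value are all assumptions.

```python
import numpy as np

def anomaly_map(patch_feats, normal_text, anomalous_text, temperature=0.07):
    """Score each visual patch against two text prompts (hypothetical sketch).

    patch_feats:     (N, D) patch embeddings (stand-ins for CLIP image features)
    normal_text:     (D,) embedding of the class-specific "normal" prompt
    anomalous_text:  (D,) embedding of the class-specific "anomalous" prompt
    Returns (N,) per-patch probabilities of matching the anomalous prompt.
    """
    def l2norm(x):
        # Normalize so dot products become cosine similarities.
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    p = l2norm(np.asarray(patch_feats, dtype=float))
    t = l2norm(np.stack([normal_text, anomalous_text]).astype(float))  # (2, D)

    logits = p @ t.T / temperature                # (N, 2) cosine sims / temp
    logits -= logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs[:, 1]                            # "anomalous" prompt probability
```

Reshaping the per-patch scores back to the feature grid would give a coarse anomaly map, which the paper's segmentor then refines to high resolution.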

📝 Abstract
Visual anomaly detection in multi-class settings poses significant challenges due to the diversity of object categories, the scarcity of anomalous examples, and the presence of camouflaged defects. In this paper, we propose PromptMAD, a cross-modal prompting framework for unsupervised visual anomaly detection and localization that integrates semantic guidance through vision-language alignment. By leveraging CLIP-encoded text prompts describing both normal and anomalous class-specific characteristics, our method enriches visual reconstruction with semantic context, improving the detection of subtle and textural anomalies. To further address the challenge of class imbalance at the pixel level, we incorporate the Focal loss, which emphasizes hard-to-detect anomalous regions during training. Our architecture also includes a supervised segmentor that fuses multi-scale convolutional features with Transformer-based spatial attention and diffusion-based iterative refinement, yielding precise and high-resolution anomaly maps. Extensive experiments on the MVTec-AD dataset demonstrate that our method achieves state-of-the-art pixel-level performance, improving mean AUC to 98.35% and AP to 66.54%, while maintaining efficiency across diverse categories.
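The pixel-level class imbalance the abstract mentions is handled with the Focal loss, which down-weights easy pixels via a modulating factor so that rare, hard anomalous regions dominate the gradient. A minimal binary sketch (the `gamma` and `alpha` defaults follow the common Focal loss convention; they are not values stated in this paper):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss over per-pixel anomaly predictions (illustrative sketch).

    probs:   predicted anomaly probabilities per pixel, in (0, 1)
    targets: ground-truth labels per pixel, 0 (normal) or 1 (anomalous)
    The (1 - p_t)**gamma factor shrinks the loss on confidently-correct
    (easy) pixels, focusing training on hard-to-detect anomalous regions.
    """
    eps = 1e-7
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0 - eps)
    t = np.asarray(targets)
    p_t = np.where(t == 1, p, 1.0 - p)            # prob. of the true class
    alpha_t = np.where(t == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```

A well-classified pixel (e.g. `p_t = 0.95`) contributes almost nothing once scaled by `(1 - 0.95)**2`, while an uncertain pixel near `p_t = 0.5` keeps most of its cross-entropy weight.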
Problem

Research questions and friction points this paper is trying to address.

multi-class visual anomaly detection
anomaly localization
class diversity
anomalous sample scarcity
camouflaged defects
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-modal prompting
vision-language alignment
unsupervised anomaly localization
Focal loss
diffusion refinement
Duncan McCain
Holcombe Department of Electrical and Computer Engineering, Clemson University
Hossein Kashiani
Holcombe Department of Electrical and Computer Engineering, Clemson University
Fatemeh Afghah
Associate Professor, Electrical and Computer Engineering Department, Clemson University
Wireless Communications · 5G/6G · AI/ML · Multi-modal large language models · UAV systems