Do LLMs Understand Visual Anomalies? Uncovering LLM's Capabilities in Zero-shot Anomaly Detection

📅 2024-04-15
🏛️ ACM Multimedia
📈 Citations: 14
Influential: 1
🤖 AI Summary
Existing zero-shot visual anomaly detection (VAD) methods rely on static anomaly prompts that suffer from cross-semantic ambiguity, and they model only global image–text alignment, which limits pixel-level localization accuracy. To address these limitations, the authors propose ALFA, a training-free framework for zero-shot VAD. First, ALFA introduces run-time prompt adaptation: a large language model (LLM) generates informative anomaly prompts, and a contextual scoring mechanism adapts them per image to mitigate prompt ambiguity. Second, a fine-grained aligner projects image–text alignment from the global to the local semantic space, enabling precise pixel-level localization. The whole pipeline requires neither anomaly samples nor model fine-tuning. Evaluated on the MVTec and VisA benchmarks, ALFA improves the PRO metric by 12.1% and 8.9%, respectively, over prior state-of-the-art zero-shot VAD approaches.

📝 Abstract
Large vision-language models (LVLMs) are markedly proficient in deriving visual representations guided by natural language. Recent explorations have utilized LVLMs to tackle zero-shot visual anomaly detection (VAD) challenges by pairing images with textual descriptions indicative of normal and abnormal conditions, referred to as anomaly prompts. However, existing approaches depend on static anomaly prompts that are prone to cross-semantic ambiguity, and prioritize global image-level representations over crucial local pixel-level image-to-text alignment that is necessary for accurate anomaly localization. In this paper, we present ALFA, a training-free approach designed to address these challenges via a unified model. We propose a run-time prompt adaptation strategy, which first generates informative anomaly prompts to leverage the capabilities of a large language model (LLM). This strategy is enhanced by a contextual scoring mechanism for per-image anomaly prompt adaptation and cross-semantic ambiguity mitigation. We further introduce a novel fine-grained aligner to fuse local pixel-level semantics for precise anomaly localization, by projecting the image-text alignment from global to local semantic spaces. Extensive evaluations on MVTec and VisA datasets confirm ALFA's effectiveness in harnessing the language potential for zero-shot VAD, achieving significant PRO improvements of 12.1% on MVTec and 8.9% on VisA compared to state-of-the-art approaches.
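The abstract's core mechanism, scoring image regions against "normal" and "abnormal" text prompts in a shared embedding space, can be illustrated with a minimal sketch. This is not ALFA's actual implementation; it is a generic CLIP-style two-way softmax over patch–prompt cosine similarities, with all function names and the `temperature` value chosen here for illustration:

```python
import numpy as np

def patch_anomaly_map(patch_embs, normal_embs, abnormal_embs, temperature=0.07):
    """Illustrative patch-level anomaly scoring (hypothetical, CLIP-style).

    patch_embs:    (P, D)  L2-normalized patch embeddings for one image
    normal_embs:   (Kn, D) L2-normalized embeddings of normal-state prompts
    abnormal_embs: (Ka, D) L2-normalized embeddings of anomaly prompts
    Returns a (P,) anomaly score in [0, 1] per patch.
    """
    # Cosine similarity of each patch to each prompt set, averaged over prompts.
    sim_normal = (patch_embs @ normal_embs.T).mean(axis=1) / temperature
    sim_abnormal = (patch_embs @ abnormal_embs.T).mean(axis=1) / temperature
    # Two-way softmax: probability mass assigned to the "abnormal" prompts.
    return np.exp(sim_abnormal) / (np.exp(sim_normal) + np.exp(sim_abnormal))
```

Reshaping the per-patch scores back to the feature-map grid yields a pixel-level anomaly map, which is the kind of local alignment the abstract argues static, image-level methods miss.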
Problem

Research questions and friction points this paper is trying to address.

Addresses cross-semantic ambiguity in anomaly prompts
Improves local pixel-level image-to-text alignment
Enhances zero-shot visual anomaly detection accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Run-time prompt adaptation for dynamic anomaly prompts
Contextual scoring to mitigate cross-semantic ambiguity
Fine-grained aligner for local pixel-level anomaly localization
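The first two innovations, LLM-generated prompts filtered by a contextual score, can be sketched as a simple per-image ranking step. This is a hypothetical reading of the mechanism, not the paper's code: candidate prompts (assumed to come from an LLM) are scored by cosine similarity to the current image's global embedding, and only the top-k are kept for that image:

```python
import numpy as np

def select_prompts(image_emb, cand_embs, cand_texts, top_k=2):
    """Hypothetical contextual scoring: adapt the anomaly-prompt set per image.

    image_emb:  (D,)   L2-normalized global image embedding
    cand_embs:  (K, D) L2-normalized embeddings of LLM-generated candidate prompts
    cand_texts: list of K candidate prompt strings
    Returns the top_k prompt strings most relevant to this image.
    """
    scores = cand_embs @ image_emb            # cosine similarity per candidate
    order = np.argsort(scores)[::-1][:top_k]  # highest-scoring candidates first
    return [cand_texts[i] for i in order]
```

Selecting prompts per image, rather than fixing one static set for the whole dataset, is what lets a framework of this kind sidestep cross-semantic ambiguity (e.g. "crack" meaning different defects on glass versus concrete).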