AI Summary
This work proposes LAKE, a training-free framework that challenges the prevailing practice of treating vision-language models (VLMs) as black-box feature extractors by uncovering their intrinsic anomaly detection capabilities. The study reveals, for the first time, the existence of sparse anomaly-sensitive neurons within pre-trained VLMs and demonstrates that these neurons can be activated using only a few normal samples to construct a compact representation of normality. By integrating neuron activation analysis, cross-modal semantic alignment, and modeling of structural biases, LAKE achieves state-of-the-art performance on multiple industrial anomaly detection benchmarks without requiring fine-tuning or adapter modules. Moreover, the method offers neuron-level interpretability, providing insights into the model's decision-making process.
Abstract
Large-scale vision-language models (VLMs) exhibit remarkable zero-shot capabilities, yet the internal mechanisms driving their anomaly detection (AD) performance remain poorly understood. Current methods predominantly treat VLMs as black-box feature extractors, assuming that anomaly-specific knowledge must be acquired through external adapters or memory banks. In this paper, we challenge this assumption by arguing that anomaly knowledge is intrinsically embedded within pre-trained models but remains latent and under-activated. We hypothesize that this knowledge is concentrated within a sparse subset of anomaly-sensitive neurons. To validate this, we propose latent anomaly knowledge excavation (LAKE), a training-free framework that identifies and elicits these critical neuronal signals using only a minimal set of normal samples. By isolating these sensitive neurons, LAKE constructs a highly compact normality representation that integrates visual structural deviations with cross-modal semantic activations. Extensive experiments on industrial AD benchmarks demonstrate that LAKE achieves state-of-the-art performance while providing intrinsic, neuron-level interpretability. Ultimately, our work advocates for a paradigm shift: redefining anomaly detection as the targeted activation of latent pre-trained knowledge rather than as a downstream task to be learned.
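The abstract does not specify how the sparse anomaly-sensitive neurons are selected or how the compact normality representation is scored against. The following is a minimal illustrative sketch of the general idea, not the authors' implementation: it assumes neuron activations have already been extracted from a VLM, selects the neurons most stable across a few normal samples (so that a deviation at test time is informative), and scores a test sample by its distance to the few-shot normality prototype on those neurons. The selection criterion (low variance) and the distance metric are assumptions for illustration.

```python
import numpy as np

def select_sensitive_neurons(normal_acts: np.ndarray, k: int = 32) -> np.ndarray:
    """Pick the k neurons whose activations are most stable across the
    few normal samples. Illustrative criterion only; LAKE's actual
    selection rule may differ."""
    variances = normal_acts.var(axis=0)          # per-neuron variance over samples
    return np.argsort(variances)[:k]             # indices of the k most stable neurons

def anomaly_score(test_act: np.ndarray, normal_acts: np.ndarray,
                  neuron_idx: np.ndarray) -> float:
    """Distance from the few-shot normality prototype, restricted to the
    selected neurons; larger = more anomalous."""
    prototype = normal_acts[:, neuron_idx].mean(axis=0)
    return float(np.linalg.norm(test_act[neuron_idx] - prototype))

# Toy stand-in for VLM activations: 8 normal samples, 512 "neurons".
rng = np.random.default_rng(0)
normal = rng.normal(size=(8, 512))
idx = select_sensitive_neurons(normal, k=32)

nominal = rng.normal(size=512)                   # in-distribution test sample
anomalous = nominal.copy()
anomalous[idx] += 5.0                            # perturb the sensitive neurons

assert anomaly_score(anomalous, normal, idx) > anomaly_score(nominal, normal, idx)
```

In a real pipeline the rows of `normal` would be intermediate activations from a pre-trained VLM (e.g. a CLIP-style encoder), and the cross-modal component described in the abstract would add text-derived activations alongside the visual ones; both of those steps are outside this sketch.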