🤖 AI Summary
In video anomaly detection (VAD), existing likelihood-based generative methods struggle to identify localized “unseen” anomalies residing within the neighborhood of the normal data distribution, owing to perceptual blind spots across the scene, motion, and appearance dimensions. To address this, we propose a novel VAD framework that jointly models scene, motion, and appearance perception. Our method introduces: (1) a noise-conditioned score transformer that jointly captures scene dependencies and motion dynamics; and (2) an autoregressive denoising score matching mechanism that progressively accumulates anomalous contextual cues. By integrating Gaussian noise injection, motion-weighted scoring, and multimodal feature fusion, our approach achieves state-of-the-art performance on the ShanghaiTech, UCSD Ped2, and CUHK Avenue benchmarks. It significantly improves robustness in detecting subtle, localized, in-distribution anomalies, overcoming key limitations of prior likelihood-based methods.
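The motion-weighted scoring mentioned above can be sketched as follows. This is a minimal illustration only: the function name `motion_weight`, the use of mean absolute frame differences, and the max-normalization are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def motion_weight(key_frames, eps=1e-6):
    """Hypothetical motion weight: mean absolute intensity change
    between consecutive key frames, normalized to [0, 1].

    key_frames: array of shape (T, H, W), a grayscale key-frame sequence.
    Returns an array of shape (T - 1,), one weight per frame transition.
    """
    # Per-transition mean absolute difference between consecutive frames
    diffs = np.abs(np.diff(key_frames, axis=0)).mean(axis=(1, 2))
    # Normalize so the largest-motion transition gets weight ~1
    return diffs / (diffs.max() + eps)
```

In such a scheme, transitions with large motion receive larger weights, so the score function is emphasized exactly where motion dynamics matter most.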
📝 Abstract
Video anomaly detection (VAD) is an important computer vision problem. Thanks to the mode-coverage capabilities of generative models, the likelihood-based paradigm is attracting growing interest, as it can model the distribution of normal data and detect out-of-distribution anomalies. However, likelihood-based methods are blind to anomalies located in local modes near the learned distribution. To handle these "unseen" anomalies, we examine three gaps unique to VAD, concerning scene, motion, and appearance. Specifically, we first build a noise-conditioned score transformer for denoising score matching. We then introduce a scene-dependent and motion-aware score function by embedding the scene condition of the input sequence into our model and assigning motion weights based on the differences between key frames of the input sequence. Next, to resolve this blindness in principle, we integrate unaffected visual information through a novel autoregressive denoising score matching mechanism at inference time: we autoregressively inject Gaussian noise of increasing intensity into the denoised data and estimate the corresponding score function, then compare the denoised data with the original data and aggregate the resulting difference with the score function, enhancing appearance perception and accumulating abnormal context. With all three gaps considered, we compute a more comprehensive anomaly indicator. Experiments on three popular VAD benchmarks demonstrate the state-of-the-art performance of our method.
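The autoregressive inference loop described above can be sketched roughly as follows. Everything here is an illustrative assumption rather than the paper's exact method: the one-step denoising update, the toy Gaussian score function, and the way score magnitude and appearance difference are combined into an indicator are all simplifications.

```python
import numpy as np

def anomaly_score(x, score_fn, sigmas, motion_w=1.0, rng=None):
    """Sketch of autoregressive denoising score matching at inference.

    x        : original sample (e.g., a flattened frame feature)
    score_fn : model estimating the score of noise-perturbed data
    sigmas   : increasing noise levels, injected autoregressively
    motion_w : motion weight for this sample (assumed precomputed)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x_denoised = x.copy()
    indicator = 0.0
    for sigma in sigmas:  # intensifying Gaussian noise
        x_noisy = x_denoised + sigma * rng.standard_normal(x.shape)
        score = score_fn(x_noisy, sigma)
        # one-step denoising with the estimated score
        x_denoised = x_noisy + sigma**2 * score
        # appearance gap between denoised and original data
        diff = np.mean(np.abs(x_denoised - x))
        # aggregate score magnitude and appearance difference,
        # accumulating abnormal context across noise levels
        indicator += motion_w * (np.mean(np.abs(score)) + diff)
    return indicator

# Toy score function for normal data ~ N(0, I): the sigma-perturbed
# marginal is N(0, (1 + sigma^2) I), whose score is -x / (1 + sigma^2).
def toy_score(x, sigma):
    return -x / (1.0 + sigma**2)
```

Under this sketch, a sample far from the normal mode yields large score magnitudes and a larger denoised-versus-original gap, so its accumulated indicator exceeds that of a normal sample.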