🤖 AI Summary
To address the longstanding trade-off between detection accuracy and inference speed in video anomaly detection, this paper pioneers the integration of state space models—specifically Mamba—into this task. We propose VQ-Mamba UNet, a novel architecture that synergistically combines vector quantization (VQ) with a non-negative visual state space (NVSS) module, enabling joint frame prediction and optical flow reconstruction within a multi-task learning framework. Additionally, we introduce a clip-level dual-branch evaluation strategy to enhance anomaly localization and scoring robustness. Our method achieves state-of-the-art inference speed on three mainstream benchmarks—outperforming CNN- and Transformer-based baselines by several-fold—while maintaining competitive detection accuracy. Experimental results demonstrate that state space models offer superior spatiotemporal modeling capacity and significant potential for real-time, high-fidelity video anomaly detection.
📝 Abstract
Video anomaly detection (VAD) methods are mostly CNN-based or Transformer-based, achieving impressive results, but their focus on detection accuracy often comes at the expense of inference speed. The emergence of state space models in computer vision, exemplified by the Mamba model, demonstrates improved computational efficiency through selective scans and shows great potential for long-range modeling. Our study pioneers the application of Mamba to VAD, dubbed VADMamba, which is based on multi-task learning for frame prediction and optical flow reconstruction. Specifically, we propose the VQ-Mamba UNet (VQ-MaU) framework, which incorporates a Vector Quantization (VQ) layer and a Mamba-based Non-negative Visual State Space (NVSS) block. Furthermore, two individual VQ-MaU networks separately predict frames and reconstruct the corresponding optical flows, further boosting accuracy through a clip-level fusion evaluation strategy. Experimental results validate the efficacy of the proposed VADMamba on three benchmark datasets, demonstrating superior inference speed compared to prior work. Code is available at https://github.com/jLooo/VADMamba.
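The Vector Quantization (VQ) layer named in the abstract can be illustrated with a generic VQ-VAE-style nearest-neighbor lookup; this is a minimal NumPy sketch of the general technique, not the paper's exact layer, and the array shapes and codebook size below are illustrative assumptions:

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each latent vector in z (N, D) to its nearest codebook entry.

    Generic VQ-VAE-style quantization: returns the quantized latents and
    the chosen codebook indices. Illustrative only, not VADMamba's layer.
    """
    # Squared Euclidean distances, shape (N, K):
    # ||z||^2 - 2 z.e + ||e||^2 for every latent/code pair.
    dists = (
        np.sum(z**2, axis=1, keepdims=True)
        - 2.0 * z @ codebook.T
        + np.sum(codebook**2, axis=1)
    )
    indices = np.argmin(dists, axis=1)
    return codebook[indices], indices

rng = np.random.default_rng(0)
codebook = rng.standard_normal((512, 64))  # K=512 codes, dim 64 (assumed)
latents = rng.standard_normal((8, 64))     # a batch of 8 latent vectors
zq, idx = vector_quantize(latents, codebook)
```

In a full model, `zq` replaces the encoder output before decoding, with a straight-through estimator used during training so gradients flow past the non-differentiable `argmin`.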