VADMamba++: Efficient Video Anomaly Detection via Hybrid Modeling in Grayscale Space

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing video anomaly detection methods that rely on auxiliary inputs such as optical flow and involve complex multi-task frameworks ill-suited for single-task scenarios. To overcome these issues, the authors propose a single-task, auxiliary-input-free approach based on Gray-to-RGB frame reconstruction. By reconstructing grayscale frames into RGB images, the method jointly models structural geometry and color fidelity within a unified proxy task and leverages their dual inconsistency for anomaly detection. The architecture integrates a hybrid backbone combining Mamba, CNN, and Transformer modules, and computes anomaly scores by fusing quantized feature reconstruction errors with future-frame prediction errors. Evaluated under a strict single-task setting using only raw video frames, the proposed method achieves state-of-the-art performance across three benchmark datasets, demonstrating both high accuracy and computational efficiency.
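The Gray-to-RGB proxy task described above can be sketched as a simple training-pair construction: the model receives only a single-channel grayscale frame and must reconstruct the full three-channel RGB frame. This is a minimal illustration, not the paper's implementation; the function name and the ITU-R BT.601 luminance weights are assumptions.

```python
import numpy as np

def gray_to_rgb_pair(frame_rgb):
    """Build an (input, target) pair for a Gray-to-RGB proxy task:
    the model sees only the grayscale frame and must reconstruct
    the original RGB frame, forcing it to infer color appearance
    from structure alone."""
    frame_rgb = np.asarray(frame_rgb, dtype=np.float32)
    # ITU-R BT.601 luminance weights (an assumed conversion).
    gray = (0.299 * frame_rgb[..., 0]
            + 0.587 * frame_rgb[..., 1]
            + 0.114 * frame_rgb[..., 2])
    # Single-channel input, three-channel reconstruction target.
    return gray[..., None], frame_rgb

x, y = gray_to_rgb_pair(np.random.rand(64, 64, 3))
```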
📝 Abstract
VADMamba pioneered the introduction of Mamba to Video Anomaly Detection (VAD), achieving high accuracy and fast inference through hybrid proxy tasks. Nevertheless, its heavy reliance on optical flow as an auxiliary input and on inter-task fusion scoring constrains its applicability to a single proxy task. In this paper, we introduce VADMamba++, an efficient VAD method based on the Gray-to-RGB paradigm that enforces a single-channel to three-channel reconstruction mapping, designed for a single proxy task and operating without auxiliary inputs. This paradigm compels the model to infer color appearance from grayscale structure, allowing anomalies to be revealed more effectively through the dual inconsistency between structural and chromatic cues. Specifically, VADMamba++ reconstructs grayscale frames into the RGB space to simultaneously discriminate structural geometry and chromatic fidelity, thereby enhancing sensitivity to explicit visual anomalies. We further design a hybrid modeling backbone that integrates Mamba, CNN, and Transformer modules to capture diverse normal patterns while suppressing the appearance of anomalies. Furthermore, an intra-task fusion scoring strategy integrates explicit future-frame prediction errors with implicit quantized feature errors, further improving accuracy under a single-task setting. Extensive experiments on three benchmark datasets demonstrate that VADMamba++ outperforms state-of-the-art methods while balancing performance and efficiency, especially under a strict single-task setting with only frame-level inputs.
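The intra-task fusion scoring in the abstract can be sketched as follows. This is a hedged illustration only: the function name, the min-max normalization, and the balancing weight `lam` are assumptions, not the paper's exact fusion rule.

```python
import numpy as np

def fused_anomaly_scores(pred_err, feat_err, lam=0.5):
    """Sketch of intra-task fusion scoring: blend per-frame
    explicit future-frame prediction errors with implicit
    quantized-feature reconstruction errors into one anomaly
    score per frame. `lam` is an assumed balancing weight."""
    def minmax(x):
        # Normalize an error stream over the clip to [0, 1].
        x = np.asarray(x, dtype=float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return lam * minmax(pred_err) + (1 - lam) * minmax(feat_err)

# Toy usage: frame 3 shows elevated errors in both streams,
# so it receives the highest fused anomaly score.
scores = fused_anomaly_scores([0.10, 0.12, 0.11, 0.90, 0.13],
                              [0.20, 0.18, 0.22, 0.80, 0.19])
```

Normalizing each stream before blending keeps one error source from dominating the other, which is the usual motivation for fusing heterogeneous error signals in frame-level VAD scoring.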
Problem

Research questions and friction points this paper is trying to address.

Video Anomaly Detection
Single Proxy Task
Grayscale Input
Auxiliary-Free
Frame-Level Anomaly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gray-to-RGB paradigm
hybrid modeling
Mamba architecture
single-task anomaly detection
dual inconsistency
Jihao Lyu
Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, 710048, China
Minghua Zhao
Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, 710048, China
Jing Hu
Associate Professor, School of Computer Science and Engineering, Xi'an University of Technology
hyperspectral image processing
Yifei Chen
Master of CS, Xi'an University of Technology
Video Anomaly Detection
Computer Vision
Facial Expression Recognition
Shuangli Du
Xi'an University of Technology
deep learning
Cheng Shi
Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, 710048, China