🤖 AI Summary
This work addresses the limitations of existing video anomaly detection methods that rely on auxiliary inputs such as optical flow and involve complex multi-task frameworks ill-suited for single-task scenarios. To overcome these issues, the authors propose a single-task, auxiliary-input-free approach based on Gray-to-RGB frame reconstruction. By reconstructing grayscale frames into RGB images, the method jointly models structural geometry and color fidelity within a unified proxy task and leverages their dual inconsistency for anomaly detection. The architecture integrates a hybrid backbone combining Mamba, CNN, and Transformer modules, and computes anomaly scores by fusing quantized feature reconstruction errors with future-frame prediction errors. Evaluated under a strict single-task setting using only raw video frames, the proposed method achieves state-of-the-art performance across three benchmark datasets, demonstrating both high accuracy and computational efficiency.
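The scoring pipeline described above can be sketched in a few lines. This is a minimal illustrative assumption, not the paper's implementation: the reconstructor is replaced by a naive gray-to-RGB tiling, and the names `fused_anomaly_score`, `quant_feat_err`, and the weight `alpha` are hypothetical.

```python
# Hedged sketch of the Gray-to-RGB proxy task and intra-task fusion scoring.
# The real method uses a Mamba/CNN/Transformer hybrid reconstructor; here a
# trivial stand-in is used purely to show the score computation.
import numpy as np

def to_gray(rgb):
    """Collapse an RGB frame (H, W, 3) to grayscale with ITU-R BT.601 weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def fused_anomaly_score(pred_rgb, true_rgb, quant_feat_err, alpha=0.5):
    """Fuse explicit frame prediction error with implicit quantized feature error.

    alpha is an assumed balancing weight; the paper does not specify its value.
    """
    pred_err = np.mean((pred_rgb - true_rgb) ** 2)  # pixel-level MSE in RGB space
    return alpha * pred_err + (1 - alpha) * quant_feat_err

# Toy example: tile the grayscale channel back to RGB as a naive "reconstruction".
rgb = np.random.rand(4, 4, 3)
gray = to_gray(rgb)
recon = np.repeat(gray[..., None], 3, axis=2)  # stand-in Gray-to-RGB output
score = fused_anomaly_score(recon, rgb, quant_feat_err=0.01)
```

A higher `score` would flag a frame as more anomalous; in practice scores are typically normalized per video before thresholding.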
📝 Abstract
VADMamba pioneered the introduction of Mamba to Video Anomaly Detection (VAD), achieving high accuracy and fast inference through hybrid proxy tasks. Nevertheless, its heavy reliance on optical flow as an auxiliary input and on inter-task fusion scoring prevents it from operating with a single proxy task. In this paper, we introduce VADMamba++, an efficient VAD method based on the Gray-to-RGB paradigm that enforces a single-channel to three-channel reconstruction mapping, designed for a single proxy task and operating without auxiliary inputs. This paradigm compels the model to infer color appearance from grayscale structure, allowing anomalies to be revealed more effectively through dual inconsistencies between structural and chromatic cues. Specifically, VADMamba++ reconstructs grayscale frames into the RGB space to simultaneously discriminate structural geometry and chromatic fidelity, thereby enhancing sensitivity to explicit visual anomalies. We further design a hybrid modeling backbone that integrates Mamba, CNN, and Transformer modules to capture diverse normal patterns while suppressing the appearance of anomalies. Furthermore, an intra-task fusion scoring strategy integrates explicit future-frame prediction errors with implicit quantized feature errors, further improving accuracy under a single-task setting. Extensive experiments on three benchmark datasets demonstrate that VADMamba++ outperforms state-of-the-art methods while balancing accuracy and efficiency, especially under a strict single-task setting with only frame-level inputs.