🤖 AI Summary
This work addresses the limitations of existing video anomaly detection methods, which often rely on large models and single-frame prediction errors, thereby hindering deployment on edge devices and neglecting long-term temporal consistency. To overcome these challenges, we propose FoGA, a lightweight model built upon a U-Net architecture that incorporates a gated context aggregation module to dynamically fuse encoder-decoder features. FoGA introduces forward consistency learning—achieved through a novel forward consistency loss combined with a hybrid anomaly scoring strategy—marking the first such approach in the field. With only approximately 2 million parameters and a runtime speed of 155 FPS, FoGA achieves state-of-the-art performance across multiple benchmarks, offering both high detection accuracy and practical feasibility for edge deployment.
📝 Abstract
As a crucial element of public security, video anomaly detection (VAD) aims to measure deviations from normal patterns for various events in real-time surveillance systems. However, most existing VAD methods rely on large-scale models to pursue extreme accuracy, limiting their feasibility on resource-constrained edge devices. Moreover, mainstream prediction-based VAD detects anomalies using only single-frame future prediction errors, overlooking the richer constraints offered by longer-term forward temporal information. In this paper, we introduce FoGA, a lightweight VAD model that performs Forward consistency learning with Gated context Aggregation, contains about 2M parameters, and is tailored for edge devices. Specifically, we propose a U-Net-based method that extracts features from consecutive frames to generate both immediate and forward predictions. We then introduce a gated context aggregation module into the skip connections to dynamically fuse encoder and decoder features at the same spatial scale. Finally, the model is jointly optimized with a novel forward consistency loss, and a hybrid anomaly measurement strategy integrates errors from both immediate and forward frames for more accurate detection. Extensive experiments demonstrate the effectiveness of the proposed method, which substantially outperforms state-of-the-art competing methods while running at up to 155 FPS. Hence, FoGA achieves an excellent trade-off between detection performance and efficiency.
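The gated fusion in the skip connections can be illustrated with a minimal NumPy sketch. The abstract only states that encoder and decoder features at the same spatial scale are fused dynamically; the specific parameterization below (a per-channel 1×1 projection to a single sigmoid gate that interpolates between the two feature maps) is an illustrative assumption, not the paper's exact module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_aggregation(enc_feat, dec_feat, w, b):
    """Fuse same-scale encoder/decoder features with a learned spatial gate.

    enc_feat, dec_feat: (C, H, W) feature maps at the same spatial scale.
    w: (2C,) weights of a hypothetical 1x1 projection to one gate channel;
    b: scalar bias. Both stand in for learned parameters.
    """
    stacked = np.concatenate([enc_feat, dec_feat], axis=0)          # (2C, H, W)
    gate = sigmoid(np.tensordot(w, stacked, axes=([0], [0])) + b)   # (H, W)
    # The gate interpolates per pixel between encoder and decoder features.
    return gate * enc_feat + (1.0 - gate) * dec_feat

rng = np.random.default_rng(0)
enc = rng.standard_normal((8, 16, 16))
dec = rng.standard_normal((8, 16, 16))
fused = gated_aggregation(enc, dec, rng.standard_normal(16) * 0.1, 0.0)
print(fused.shape)  # (8, 16, 16)
```

With zero weights the gate is sigmoid(0) = 0.5 everywhere, i.e. a plain average of the two branches; training would move the gate toward whichever branch is more informative at each location.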
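The hybrid anomaly measurement can likewise be sketched under common prediction-based VAD conventions: frame quality is measured by PSNR, normalized per clip, and the immediate and forward scores are blended with a mixing weight. The weight `lam` and the min-max normalization are assumptions; the abstract only says errors from both immediate and forward frames are integrated.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted and a ground-truth frame."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def hybrid_anomaly_score(psnr_immediate, psnr_forward, lam=0.5):
    """Blend immediate- and forward-prediction quality into one anomaly score.

    Per-frame PSNR sequences are min-max normalized over the clip, so higher
    regularity means a more normal frame; lam is an illustrative mixing weight.
    """
    def normalize(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-8)
    regularity = lam * normalize(psnr_immediate) + (1 - lam) * normalize(psnr_forward)
    return 1.0 - regularity  # higher score = more anomalous

# Frames whose immediate and forward predictions are both poor score highest.
scores = hybrid_anomaly_score([38.0, 22.0, 37.0], [36.0, 20.0, 35.0])
print(scores.argmax())  # 1
```

Using both horizons penalizes events that a single-frame prediction can still fit locally but that violate longer-term temporal consistency.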