MemoryOut: Learning Principal Features via Multimodal Sparse Filtering Network for Semi-supervised Video Anomaly Detection

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video anomaly detection (VAD) faces two key challenges: (1) over-generalization of reconstruction/prediction models, which reconstruct anomalies nearly as well as normal events; and (2) insufficient modeling of high-level semantics when relying solely on low-level visual cues. To address these, we propose a semi-supervised VAD framework with two novel components: a Sparse Feature Filtering Module (SFFM) and a dynamic Mixture-of-Experts (MoE) architecture, which together replace conventional normal-prototype memory mechanisms. We further integrate a vision-language model (VLM) to jointly model semantic, appearance, and motion modalities, and introduce a semantic similarity constraint and a motion frame-difference contrastive loss to enhance discriminative learning. Extensive experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods, validating the effectiveness and generalizability of the sparse-bottleneck filtering paradigm and multimodal semantic-guided modeling.

📝 Abstract
Video Anomaly Detection (VAD) methods based on reconstruction or prediction face two critical challenges: (1) strong generalization capability often results in accurate reconstruction or prediction of abnormal events, making it difficult to distinguish normal from abnormal patterns; (2) reliance on only low-level appearance and motion cues limits their ability to identify the high-level semantics of abnormal events in complex scenes. To address these limitations, we propose a novel VAD framework with two key innovations. First, to suppress excessive generalization, we introduce the Sparse Feature Filtering Module (SFFM), which employs bottleneck filters to dynamically and adaptively remove abnormal information from features. Unlike traditional memory modules, it does not need to memorize normal prototypes across the training dataset. Further, we design a Mixture of Experts (MoE) architecture for the SFFM: each expert extracts specialized principal features at runtime, and different experts are selectively activated to ensure the diversity of the learned principal features. Second, to overcome the neglect of semantics in existing methods, we integrate a Vision-Language Model (VLM) to generate textual descriptions for video clips, enabling comprehensive joint modeling of semantic, appearance, and motion cues. Additionally, we enforce modality consistency through semantic similarity constraints and a motion frame-difference contrastive loss. Extensive experiments on multiple public datasets validate the effectiveness of our multimodal joint modeling framework and sparse feature filtering paradigm. Project page: https://qzfm.github.io/sfn_vad_project_page/.
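The core idea behind the SFFM's "bottleneck filter" can be illustrated with a minimal NumPy sketch: features are down-projected into a narrow bottleneck, all but the top-k activations are zeroed out, and the result is projected back. The function name, dimensions, and weights below are illustrative assumptions, not the paper's implementation; the point is that limiting bottleneck capacity restricts how well anomalous patterns can be passed through.

```python
import numpy as np

def sparse_bottleneck_filter(features, w_down, w_up, k):
    """Hypothetical sketch of a sparse bottleneck filter.

    Down-project (n, d) features to a (n, b) bottleneck, keep only the
    k largest-magnitude activations per row, then project back to (n, d).
    The sparsity constraint limits the capacity available for
    reconstructing abnormal information.
    """
    z = features @ w_down                            # (n, d) -> (n, b)
    # indices of the (b - k) smallest-magnitude activations per row
    drop = np.argsort(np.abs(z), axis=1)[:, :-k]
    z_sparse = z.copy()
    np.put_along_axis(z_sparse, drop, 0.0, axis=1)   # zero them out
    return z_sparse @ w_up                           # (n, b) -> (n, d)

rng = np.random.default_rng(0)
n, d, b, k = 2, 8, 4, 2                              # toy sizes
feats = rng.normal(size=(n, d))
w_down = rng.normal(size=(d, b))
w_up = rng.normal(size=(b, d))
out = sparse_bottleneck_filter(feats, w_down, w_up, k)
print(out.shape)  # (2, 8)
```

In the paper the projections are learned end to end; here random weights merely demonstrate the data flow.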
Problem

Research questions and friction points this paper is trying to address.

Suppress excessive generalization in video anomaly detection
Overcome neglect of high-level semantic cues
Enhance feature diversity via multimodal sparse filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Feature Filtering Module dynamically removes abnormal information
Mixture of Experts architecture extracts specialized principal features
Vision-Language Model integrates semantic textual descriptions with visual cues
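The expert-selection mechanism described above can be sketched as standard top-k MoE routing: a gate scores experts per input, and only the highest-scoring experts are activated. Everything below (the gating weights, the toy scaling experts, `moe_filter` itself) is an illustrative assumption about how such routing typically works, not the paper's architecture.

```python
import numpy as np

def moe_filter(x, gate_w, experts, top_k=1):
    """Hypothetical top-k Mixture-of-Experts routing sketch.

    A linear gate scores every expert for each input row; only the
    top_k experts are activated, and their outputs are combined with
    softmax weights over the selected logits.
    """
    logits = x @ gate_w                                   # (n, num_experts)
    chosen = np.argsort(logits, axis=1)[:, ::-1][:, :top_k]
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        sel = chosen[i]
        w = np.exp(logits[i, sel] - logits[i, sel].max()) # stable softmax
        w /= w.sum()
        for weight, e in zip(w, sel):
            out[i] += weight * experts[e](x[i])
    return out

rng = np.random.default_rng(1)
n, d, num_experts = 3, 6, 4
x = rng.normal(size=(n, d))
gate_w = rng.normal(size=(d, num_experts))
# toy experts: each just scales features by a different factor
experts = [lambda v, s=s: s * v for s in (0.5, 1.0, 1.5, 2.0)]
y = moe_filter(x, gate_w, experts, top_k=1)
print(y.shape)  # (3, 6)
```

Activating only a subset of experts per input is what lets each expert specialize on distinct principal features, which is the diversity property the Innovation bullets highlight.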