Learning to Tell Apart: Weakly Supervised Video Anomaly Detection via Disentangled Semantic Alignment

📅 2025-11-13
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing weakly supervised video anomaly detection methods pair multimodal foundation models (e.g., CLIP) with multiple instance learning, which makes them prone to saliency bias: they fixate on the most salient response segments, fail to mine the diverse normal patterns present in videos, and confuse visually similar categories, hurting fine-grained classification. To address these issues, the authors propose the Disentangled Semantic Alignment Network (DSANet). First, a self-guided normality modeling branch reconstructs input video features under the guidance of learned normal prototypes, explicitly uncovering intrinsic normal patterns. Second, a decoupled contrastive semantic alignment mechanism temporally decomposes each video into event-centric and background-centric components using frame-level anomaly scores, then applies visual-language contrastive learning to separate anomalous from normal representations and enhance class discriminability. Extensive experiments show that DSANet outperforms state-of-the-art methods on XD-Violence and UCF-Crime, improving both anomaly detection accuracy and fine-grained classification performance.
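The visual-language contrastive learning step mentioned in the summary can be illustrated with a generic InfoNCE-style loss that aligns pooled event features with class-name text embeddings. This is a minimal sketch under assumed shapes and a hypothetical function name, not the paper's exact loss:

```python
import numpy as np

def contrastive_alignment_loss(event_feats, text_embeds, labels, temp=0.07):
    """Generic InfoNCE-style visual-language alignment (illustrative sketch;
    the paper's exact formulation may differ).
    event_feats: (N, D) pooled event-centric features
    text_embeds: (C, D) class-name text embeddings (e.g., from CLIP)
    labels:      (N,) integer class indices
    """
    # Cosine similarity via L2-normalized features
    v = event_feats / np.linalg.norm(event_feats, axis=1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    logits = v @ t.T / temp                       # (N, C) scaled similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each sample's ground-truth class text
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Pulling each event feature toward its own class text embedding and away from the others is what makes visually similar categories more separable.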

๐Ÿ“ Abstract
Recent advancements in weakly-supervised video anomaly detection have achieved remarkable performance by applying the multiple instance learning paradigm based on multimodal foundation models such as CLIP to highlight anomalous instances and classify categories. However, their objectives may tend to detect the most salient response segments, while neglecting to mine diverse normal patterns separated from anomalies, and are prone to category confusion due to similar appearance, leading to unsatisfactory fine-grained classification results. Therefore, we propose a novel Disentangled Semantic Alignment Network (DSANet) to explicitly separate abnormal and normal features from coarse-grained and fine-grained aspects, enhancing the distinguishability. Specifically, at the coarse-grained level, we introduce a self-guided normality modeling branch that reconstructs input video features under the guidance of learned normal prototypes, encouraging the model to exploit normality cues inherent in the video, thereby improving the temporal separation of normal patterns and anomalous events. At the fine-grained level, we present a decoupled contrastive semantic alignment mechanism, which first temporally decomposes each video into event-centric and background-centric components using frame-level anomaly scores and then applies visual-language contrastive learning to enhance class-discriminative representations. Comprehensive experiments on two standard benchmarks, namely XD-Violence and UCF-Crime, demonstrate that DSANet outperforms existing state-of-the-art methods.
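The temporal decomposition described in the abstract, splitting each video into event-centric and background-centric components by frame-level anomaly scores, can be sketched as a simple top-k heuristic. The function name and the top-k rule are assumptions for illustration; the paper may use a different selection scheme:

```python
import numpy as np

def decompose_video(features, scores, k_ratio=0.25):
    """Split frame features into event-centric and background-centric pools
    using frame-level anomaly scores (hypothetical top-k heuristic; assumes
    0 < k_ratio < 1 so both pools are non-empty).
    features: (T, D) per-frame features; scores: (T,) anomaly scores."""
    T = len(scores)
    k = max(1, int(T * k_ratio))
    order = np.argsort(scores)      # indices sorted by ascending score
    event_idx = order[-k:]          # highest-scoring frames -> event-centric
    bg_idx = order[:-k]             # remaining frames -> background-centric
    event_feat = features[event_idx].mean(axis=0)
    bg_feat = features[bg_idx].mean(axis=0)
    return event_feat, bg_feat
```

The pooled event feature can then be aligned with anomaly-class text embeddings while the background feature is aligned with a normal/background embedding, which is the decoupling the abstract refers to.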
Problem

Research questions and friction points this paper is trying to address.

Separates abnormal and normal video features at coarse-grained level
Enhances fine-grained classification via disentangled semantic alignment
Addresses category confusion in weakly supervised anomaly detection

Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled Semantic Alignment Network separates abnormal and normal features
Self-guided normality modeling branch reconstructs video using normal prototypes
Decoupled contrastive semantic alignment enhances class-discriminative representations
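The self-guided normality modeling idea, reconstructing video features from learned normal prototypes so that poorly reconstructed frames stand out as anomalous, can be sketched as attention over a prototype bank. This is an illustrative sketch, not the paper's exact formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def prototype_reconstruct(features, prototypes):
    """Reconstruct each frame feature as a softmax-attention mixture of
    normal prototypes; a large residual suggests the frame deviates from
    learned normality (hypothetical design for illustration).
    features: (T, D) frame features; prototypes: (P, D) normal prototypes."""
    attn = softmax(features @ prototypes.T, axis=-1)   # (T, P) weights
    recon = attn @ prototypes                          # (T, D) reconstruction
    error = np.linalg.norm(features - recon, axis=-1)  # per-frame residual
    return recon, error
```

Because the prototype bank only spans normal patterns, frames far from that span cannot be reconstructed well, which sharpens the temporal separation between normal segments and anomalous events.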
Wenti Yin
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Huaxin Zhang
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Xiang Wang
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Yuqing Lu
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Yicheng Zhang
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Bingquan Gong
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Jialong Zuo
Zhejiang University
Speech Synthesis, Voice Conversion
Li Yu
School of Electronic Information and Communications, Huazhong University of Science and Technology
Changxin Gao
Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Nong Sang
Huazhong University of Science and Technology
Computer Vision and Pattern Recognition