🤖 AI Summary
Video anomaly detection (VAD) faces challenges including high computational overhead, unstable spatiotemporal localization, and poor real-time deployability when leveraging vision-language models (VLMs). To address these, we propose a two-stage cascaded VLM framework: (1) offline learning of normal behavioral rules from unlabeled video data, and (2) online inference guided by a lightweight motion-mask prompt that dynamically highlights discriminative motion regions, coupled with a rule-based deviation detection mechanism that eliminates the need to enumerate abnormal patterns explicitly. A filtering module and fine-grained VLM reasoning further enable efficient hierarchical decision-making. Evaluated on four standard benchmarks, our method achieves 57.68 FPS, 151.79× faster than the state of the art, while maintaining 97.2% detection accuracy. Key innovations include motion-aware prompting for focused visual grounding and rule-driven zero-shot deviation quantification, jointly ensuring real-time performance, robustness, and strong generalization to unseen anomalies.
📝 Abstract
Video anomaly detection (VAD) has rapidly advanced with recent developments in Vision-Language Models (VLMs). While these models offer superior zero-shot detection capabilities, their immense computational cost and unstable visual grounding performance hinder real-time deployment. To overcome these challenges, we introduce Cerberus, a two-stage cascaded system designed for efficient yet accurate real-time VAD. Cerberus learns normal behavioral rules offline, and combines lightweight filtering with fine-grained VLM reasoning during online inference. The performance gains of Cerberus come from two key innovations: motion mask prompting and rule-based deviation detection. The former directs the VLM's attention to regions relevant to motion, while the latter identifies anomalies as deviations from learned norms rather than enumerating possible anomalies. Extensive evaluations on four datasets show that Cerberus achieves on average 57.68 fps on an NVIDIA L40S GPU, a 151.79× speedup over state-of-the-art VLM-based VAD methods, with a comparable accuracy of 97.2%, establishing it as a practical solution for real-time video analytics.
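The paper does not specify how the motion mask or the lightweight filter are implemented; as a minimal sketch of the cascade idea, assuming simple frame differencing for the mask and a motion-ratio threshold for the filter (function names `motion_mask` and `should_invoke_vlm` are hypothetical, not from the paper):

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, frame: np.ndarray,
                thresh: float = 25.0) -> np.ndarray:
    """Binary mask of pixels whose grayscale intensity changed by more
    than `thresh` between consecutive frames (a stand-in for the paper's
    motion-mask prompt, which highlights discriminative motion regions)."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    return diff > thresh

def should_invoke_vlm(mask: np.ndarray, min_motion_ratio: float = 0.01) -> bool:
    """Lightweight filtering stage: escalate to the costly fine-grained
    VLM reasoning only when enough of the frame is in motion."""
    return mask.mean() >= min_motion_ratio

# A static scene: no pixel change, so the filter skips the VLM entirely.
static = np.full((64, 64), 100, dtype=np.uint8)
print(should_invoke_vlm(motion_mask(static, static)))   # False

# A moving object: a bright 16x16 patch appears, so the frame is escalated.
moving = static.copy()
moving[10:26, 10:26] = 200
print(should_invoke_vlm(motion_mask(static, moving)))   # True
```

In a real deployment the escalated frames, cropped or prompted with the mask, would be passed to the VLM together with the offline-learned normality rules; this sketch only illustrates the hierarchical filter-then-reason structure.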