VERA: Explainable Video Anomaly Detection via Verbalized Learning of Vision-Language Models

πŸ“… 2024-12-02
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing video anomaly detection (VAD) methods rely on model fine-tuning or auxiliary inference modules, incurring high computational overhead and demanding expensive fine-grained annotations. To address these limitations, we propose VERA, a parameter-free, interpretable VAD framework. VERA decomposes the complex reasoning required for VAD into optimization-friendly guiding questions through data-driven collaboration between two vision-language models (VLMs) acting as learner and optimizer; the questions serve as implicit parameters, eliminating the need for weight updates. By jointly modeling scene and temporal context, VERA localizes anomalies from the segment level down to the frame level while generating natural-language explanations. It requires only coarse-grained supervision and is optimized end to end. On mainstream benchmarks, VERA achieves significant improvements in both detection accuracy and explanation fidelity, demonstrating low annotation dependency, strong cross-domain generalization, and lightweight adaptability.

πŸ“ Abstract
The rapid advancement of vision-language models (VLMs) has established a new paradigm in video anomaly detection (VAD): leveraging VLMs to simultaneously detect anomalies and provide comprehensible explanations for the decisions. Existing work in this direction often assumes the complex reasoning required for VAD exceeds the capabilities of pretrained VLMs. Consequently, these approaches either incorporate specialized reasoning modules during inference or rely on instruction tuning datasets through additional training to adapt VLMs for VAD. However, such strategies often incur substantial computational costs or data annotation overhead. To address these challenges in explainable VAD, we introduce a verbalized learning framework named VERA that enables VLMs to perform VAD without model parameter modifications. Specifically, VERA automatically decomposes the complex reasoning required for VAD into reflections on simpler, more focused guiding questions capturing distinct abnormal patterns. It treats these reflective questions as learnable parameters and optimizes them through data-driven verbal interactions between learner and optimizer VLMs, using coarsely labeled training data. During inference, VERA embeds the learned questions into model prompts to guide VLMs in generating segment-level anomaly scores, which are then refined into frame-level scores via the fusion of scene and temporal contexts. Experimental results on challenging benchmarks demonstrate that the learned questions of VERA are highly adaptable, significantly improving both detection performance and explainability of VLMs for VAD.
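The learner-optimizer loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the functions `learner_vlm` and `optimizer_vlm` are toy stubs standing in for real VLM calls, and tag matching stands in for the models' verbal reasoning.

```python
# Sketch of VERA-style verbalized learning: guiding questions act as
# learnable "parameters" and are revised through verbal feedback between
# a learner VLM and an optimizer VLM on coarsely labeled (video-level)
# training data. All names and mechanics here are illustrative stubs.

def learner_vlm(questions, video_clip):
    """Stub learner: flags a clip as anomalous if any guiding question
    matches one of the clip's tags (stands in for VLM reasoning)."""
    return 1.0 if any(q in video_clip["tags"] for q in questions) else 0.0

def optimizer_vlm(questions, missed_anomalies):
    """Stub optimizer: rewrites the question set by folding in patterns
    from missed anomalous clips (stands in for verbal feedback)."""
    new_questions = set(questions)
    for clip in missed_anomalies:
        new_questions.update(clip["tags"])
    return sorted(new_questions)

def verbalized_learning(questions, train_set, epochs=3):
    """Optimize guiding questions against coarse video-level 0/1 labels,
    with no gradient updates to any model weights."""
    for _ in range(epochs):
        mistakes = [c for c in train_set
                    if (learner_vlm(questions, c) >= 0.5) != bool(c["label"])]
        if not mistakes:
            break  # learner agrees with all coarse labels
        questions = optimizer_vlm(questions,
                                  [c for c in mistakes if c["label"]])
    return questions
```

At inference time, the learned questions would simply be embedded into the VLM prompt; the loop itself never touches model parameters, which is the point of treating questions as the learnable state.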
Problem

Research questions and friction points this paper is trying to address.

Enables VLMs to detect video anomalies without model modifications
Reduces computational costs and data annotation overhead in VAD
Improves anomaly detection performance and explainability via learned questions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes complex reasoning into simpler guiding questions
Optimizes questions via verbal interactions between VLMs
Refines anomaly scores with scene and temporal contexts
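The last step, turning segment-level scores into frame-level ones, can be illustrated with a small temporal-smoothing sketch. This is an assumption-laden stand-in for VERA's scene- and temporal-context fusion, not the paper's exact procedure; the Gaussian kernel and `sigma` value are arbitrary choices for illustration.

```python
import math

def segment_to_frame_scores(segment_scores, frames_per_segment, sigma=1.0):
    """Expand segment-level anomaly scores to the frame level, then apply
    Gaussian temporal smoothing so scores vary smoothly across segment
    boundaries (a simplified proxy for temporal-context fusion)."""
    # Repeat each segment score for every frame it covers.
    frame_scores = [s for s in segment_scores
                    for _ in range(frames_per_segment)]
    # Build a normalized Gaussian kernel over a +/- 3*sigma window.
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    smoothed = []
    for t in range(len(frame_scores)):
        acc = 0.0
        for k, w in enumerate(kernel):
            # Clamp indices at the ends so border frames reuse edge scores.
            j = min(max(t + k - radius, 0), len(frame_scores) - 1)
            acc += w * frame_scores[j]
        smoothed.append(acc / norm)
    return smoothed
```

With segment scores `[0.0, 1.0]` and four frames per segment, the smoothed frame scores ramp up gradually across the segment boundary instead of jumping, which is the behavior frame-level refinement is after.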
πŸ”Ž Similar Papers
No similar papers found.