🤖 AI Summary
Current evaluation methods for video anomaly understanding (VAU) struggle to assess models’ fine-grained descriptive capabilities for anomalous events and often diverge from human perception. This work reframes VAU as a tripartite parsing task, capturing the anomaly’s events (“What”), the participating entities (“Who”), and the spatial context (“Where”), and introduces FineVAU, a benchmark comprising FineW3, a new dataset curated by augmenting existing human annotations with structured automatic labeling, and FVScore, a human-aligned evaluation metric grounded in key visual elements. FVScore enables the first interpretable, fine-grained assessment of large vision-language models (LVLMs) based on their coverage of critical visual elements. Human evaluations demonstrate that FVScore aligns with human perception significantly better than existing metrics. Experiments further reveal that LVLMs underperform on tasks requiring fine-grained spatiotemporal reasoning but perform well on anomalies involving static information or strong visual cues.
📝 Abstract
Video Anomaly Understanding (VAU) is a novel task focused on describing unusual occurrences in videos. Despite growing interest, the evaluation of VAU remains an open challenge. Existing benchmarks rely on n-gram-based metrics (e.g., BLEU, ROUGE-L) or LLM-based evaluation. The former fails to capture the rich, free-form, and visually grounded nature of LVLM responses, while the latter prioritizes language quality over factual relevance, often resulting in subjective judgments that are misaligned with human perception. In this work, we address this issue by proposing FineVAU, a new benchmark for VAU that shifts the focus towards rich, fine-grained, and domain-specific understanding of anomalous videos. We formulate VAU as a three-fold problem, with the goal of comprehensively understanding the key descriptive elements of anomalies in video: events (What), participating entities (Who), and location (Where). Our benchmark introduces a) FVScore, a novel, human-aligned evaluation metric that assesses the presence of critical visual elements in LVLM answers, providing interpretable, fine-grained feedback; and b) FineW3, a novel, comprehensive dataset curated through a structured and fully automatic procedure that augments existing human annotations with high-quality, fine-grained visual information. Human evaluation reveals that our proposed metric aligns with human perception of anomalies far better than current approaches. Detailed experiments on FineVAU unveil critical limitations in LVLMs' ability to perceive anomalous events that require spatial and fine-grained temporal understanding, despite strong performance on coarse-grained and static information and on events with strong visual cues.
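The abstract describes FVScore as checking whether an LVLM answer covers critical visual elements along the What/Who/Where axes. The paper's actual scoring procedure is not given here; the sketch below is a minimal, hypothetical illustration of element-coverage scoring, with all function names invented and matching reduced to naive substring lookup for simplicity.

```python
from typing import Dict, List


def element_coverage(answer: str, elements: Dict[str, List[str]]) -> Dict[str, float]:
    """Per-axis fraction of reference visual elements mentioned in the answer.

    `elements` maps each axis ("what", "who", "where") to its key visual
    elements. Matching is naive case-insensitive substring lookup, purely
    for illustration; FVScore's real matching is not specified here.
    """
    text = answer.lower()
    scores: Dict[str, float] = {}
    for axis, items in elements.items():
        if not items:
            # No reference elements on this axis: treat as trivially covered.
            scores[axis] = 1.0
            continue
        hits = sum(1 for item in items if item.lower() in text)
        scores[axis] = hits / len(items)
    return scores


def overall_score(per_axis: Dict[str, float]) -> float:
    """Unweighted mean over the What/Who/Where axes (an assumed aggregation)."""
    return sum(per_axis.values()) / len(per_axis)
```

For example, the answer "A man in a hoodie is shoplifting near the shelves" against reference elements {what: shoplifting, who: man in a hoodie, where: convenience store} would score 1.0 on What and Who but 0.0 on Where, surfacing the missing location as interpretable, per-axis feedback rather than a single opaque score.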