🤖 AI Summary
Existing VAD/VAU benchmarks suffer from limited scene diversity, imbalanced anomaly class distributions, and insufficient temporal complexity. Moreover, VAU demands deep semantic understanding and causal reasoning, making manual annotation costly and hard to scale. To address these limitations, we introduce Pistachio, the first fully generative video anomaly benchmark, built on state-of-the-art video diffusion models. Pistachio employs scene-conditioned anomaly injection, multi-stage narrative planning, and long-sequence temporal-consistency optimization to synthesize coherent, controllable 41-second anomaly videos. It supports diverse, balanced, and long-horizon (multi-event) anomaly modeling, mitigating the biases and annotation bottlenecks of real-world data. Experiments demonstrate that Pistachio surpasses existing benchmarks in scale, diversity, and temporal complexity, exposing critical weaknesses of mainstream methods on dynamic, long-duration anomaly understanding. It thereby establishes a new standard and identifies key challenges for future VAU research.
📝 Abstract
Automatically detecting abnormal events in videos is crucial for modern autonomous systems, yet existing Video Anomaly Detection (VAD) benchmarks lack the scene diversity, balanced anomaly coverage, and temporal complexity needed to reliably assess real-world performance. Meanwhile, the community is increasingly moving toward Video Anomaly Understanding (VAU), which requires deeper semantic and causal reasoning but remains difficult to benchmark because of the heavy manual annotation effort it demands. In this paper, we introduce Pistachio, a new VAD/VAU benchmark constructed entirely through a controlled, generation-based pipeline. By leveraging recent advances in video generation models, Pistachio provides precise control over scenes, anomaly types, and temporal narratives, effectively eliminating the biases and limitations of Internet-collected datasets. Our pipeline integrates scene-conditioned anomaly assignment, multi-step storyline generation, and a temporally consistent long-form synthesis strategy that produces coherent 41-second videos with minimal human intervention. Extensive experiments demonstrate the scale, diversity, and complexity of Pistachio, revealing new challenges for existing methods and motivating future research on dynamic, multi-event anomaly understanding.