🤖 AI Summary
Existing traffic video datasets offer limited coverage of vulnerable road users (VRUs) in complex Asian urban scenarios. Method: We introduce DAVE, the first large-scale, atomic-level visual element dataset tailored to intricate Asian traffic environments, featuring dense annotations of over 13 million bounding boxes across 16 actor categories and 16 challenging dynamic behaviors (e.g., zigzag crossing, U-turns). We propose a multi-dimensional environmental sampling paradigm that jointly considers weather, time of day, and traffic congestion, raising VRU instance density to 41.13%, a 73% relative increase over Waymo. We further adopt multi-granularity spatiotemporal labeling (ID + behavior + action sequences) and cross-modal task adaptation. Contribution/Results: State-of-the-art models suffer significant performance degradation on DAVE, confirming its difficulty. DAVE establishes a high-fidelity benchmark for VRU perception, advancing robust visual understanding in real-world traffic scenarios.
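As a quick sanity check, the "73% increase" figure can be reproduced from the two reported VRU instance shares (41.13% for DAVE, 23.71% for Waymo); it is a relative, not absolute, increase:

```python
# Relative increase in VRU instance share, DAVE vs. Waymo (figures from the paper)
dave_vru_share = 41.13   # % of DAVE instances that are VRUs
waymo_vru_share = 23.71  # % of Waymo instances that are VRUs

relative_increase = (dave_vru_share / waymo_vru_share - 1) * 100
print(f"{relative_increase:.0f}% relative increase")  # prints "73% relative increase"
```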
📝 Abstract
Most existing traffic video datasets, including Waymo, are structured and focus predominantly on Western traffic, which hinders global applicability. Most Asian scenarios are far more complex, involving numerous objects with distinct motions and behaviors. Addressing this gap, we present a new dataset, DAVE, designed for evaluating perception methods with high representation of Vulnerable Road Users (VRUs, e.g., pedestrians, animals, motorbikes, and bicycles) in complex and unpredictable environments. DAVE is a manually annotated dataset encompassing 16 diverse actor categories (spanning animals, humans, vehicles, etc.) and 16 action types (including complex and rare cases such as cut-ins, zigzag movement, and U-turns) that require high reasoning ability. DAVE densely annotates over 13 million bounding boxes (bboxes) of actors with identification, and more than 1.6 million boxes carry both actor identification and action/behavior details. The videos in DAVE were collected across a broad spectrum of factors, such as weather conditions, time of day, road scenarios, and traffic density. DAVE can benchmark video tasks such as Tracking, Detection, Spatiotemporal Action Localization, Language-Visual Moment Retrieval, and Multi-label Video Action Recognition. Given the critical importance of accurately identifying VRUs to prevent accidents and ensure road safety, vulnerable road users constitute 41.13% of instances in DAVE, compared to 23.71% in Waymo. DAVE thus provides an invaluable resource for developing more sensitive and accurate visual perception algorithms for the complex real world. Our experiments show that existing methods suffer performance degradation when evaluated on DAVE, highlighting its benefit for future video recognition research.