🤖 AI Summary
Existing cooperative perception datasets are often limited to a single intersection or a single vehicle, resulting in insufficient scene diversity, model overfitting, and biased benchmarking. To address this, we introduce UrbanIng-V2X, the first real-world cooperative perception dataset encompassing multiple intersections, vehicles, and roadside infrastructure, collected at three representative urban intersections in Ingolstadt, Germany. The system integrates 12 onboard RGB cameras, 2 onboard LiDARs, 17 roadside thermal cameras, and 12 roadside LiDARs; all sequences are spatiotemporally synchronized and annotated with 3D bounding boxes at 10 Hz. The dataset comprises 34 synchronized 20-second sequences with approximately 712k annotated 3D box instances covering 13 traffic object classes. We also release high-definition maps, a digital twin simulation environment, and an open-source codebase. This dataset enables cross-scenario generalization evaluation and establishes a more robust, realistic benchmark for vehicle-infrastructure cooperative perception algorithms.
📝 Abstract
Recent cooperative perception datasets have played a crucial role in advancing smart mobility applications by enabling information exchange between intelligent agents, helping to overcome challenges such as occlusions and improving overall scene understanding. While some existing real-world datasets incorporate both vehicle-to-vehicle and vehicle-to-infrastructure interactions, they are typically limited to a single intersection or a single vehicle. A comprehensive perception dataset featuring multiple connected vehicles and infrastructure sensors across several intersections remains unavailable, limiting the benchmarking of algorithms in diverse traffic environments. Consequently, overfitting can occur, and models may demonstrate misleadingly high performance due to similar intersection layouts and traffic participant behavior. To address this gap, we introduce UrbanIng-V2X, the first large-scale, multi-modal dataset supporting cooperative perception involving vehicles and infrastructure sensors deployed across three urban intersections in Ingolstadt, Germany. UrbanIng-V2X consists of 34 temporally aligned and spatially calibrated sensor sequences, each lasting 20 seconds. All sequences contain recordings from one of three intersections, involving two vehicles and up to three infrastructure-mounted sensor poles operating in coordinated scenarios. In total, UrbanIng-V2X provides data from 12 vehicle-mounted RGB cameras, 2 vehicle LiDARs, 17 infrastructure thermal cameras, and 12 infrastructure LiDARs. All sequences are annotated at a frequency of 10 Hz with 3D bounding boxes spanning 13 object classes, resulting in approximately 712k annotated instances across the dataset. We provide comprehensive evaluations using state-of-the-art cooperative perception methods and publicly release the codebase, dataset, HD map, and a digital twin of the complete data collection environment.
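The scale figures quoted in the abstract can be cross-checked with simple arithmetic. The sketch below is illustrative only (it is not an official UrbanIng-V2X statistic); the variable names are our own, and the average-boxes-per-timestamp figure is merely what the ~712k instance count implies under the stated 10 Hz annotation rate:

```python
# Back-of-envelope scale check for the numbers stated in the abstract.
SEQUENCES = 34        # temporally aligned sequences
SEQ_SECONDS = 20      # duration of each sequence
ANNOTATION_HZ = 10    # 3D box annotation frequency

# Annotated timestamps per sequence and across the whole dataset.
timestamps_per_seq = SEQ_SECONDS * ANNOTATION_HZ       # 200
total_timestamps = SEQUENCES * timestamps_per_seq      # 6800

# Sensor streams listed in the abstract.
sensors = {
    "vehicle RGB cameras": 12,
    "vehicle LiDARs": 2,
    "infrastructure thermal cameras": 17,
    "infrastructure LiDARs": 12,
}
total_sensors = sum(sensors.values())                  # 43

# Average 3D boxes per annotated timestamp implied by ~712k instances.
TOTAL_INSTANCES = 712_000
avg_boxes_per_timestamp = TOTAL_INSTANCES / total_timestamps

print(total_timestamps, total_sensors, round(avg_boxes_per_timestamp, 1))
```

That is, each 20-second sequence carries 200 annotated timestamps, and the dataset-wide 6,800 timestamps with ~712k instances imply on the order of a hundred 3D boxes per annotated timestamp, consistent with dense multi-agent urban traffic.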