🤖 AI Summary
Existing collaborative perception datasets largely lack adverse-weather scenarios, hindering the robustness of autonomous driving in complex environments. To address this, we introduce Adver-City, the first open-source synthetic collaborative perception (CP) dataset focused on adverse weather. It covers six weather conditions (including, for the first time in a synthetic CP dataset, glare) and 110 scenarios based on real crash reports. Generated with CARLA and OpenCDA, the dataset provides synchronized vehicle and roadside-unit sensor data (LiDAR, RGB and semantic segmentation cameras, GNSS, IMU) alongside 3D bounding box annotations across six object categories, totaling over 24K frames and more than 890K annotations. Experiments show substantial performance degradation under adverse weather (e.g., CoBEVT drops to 58.30/52.44/38.90 AP@30/50/70), validating the dataset's utility for improving model robustness.
📝 Abstract
Adverse weather conditions pose a significant challenge to the widespread adoption of Autonomous Vehicles (AVs) by impacting sensors like LiDARs and cameras. Even though Collaborative Perception (CP) improves AV perception in difficult conditions, existing CP datasets lack adverse weather conditions. To address this, we introduce Adver-City, the first open-source synthetic CP dataset focused on adverse weather conditions. Simulated in CARLA with OpenCDA, it contains over 24 thousand frames, over 890 thousand annotations, and 110 unique scenarios across six weather conditions: clear weather, soft rain, heavy rain, fog, foggy heavy rain and, for the first time in a synthetic CP dataset, glare. It has six object categories, including pedestrians and cyclists, and uses data from vehicles and roadside units featuring LiDARs, RGB and semantic segmentation cameras, GNSS, and IMUs. Its scenarios, based on real crash reports, depict the road configurations most relevant to adverse weather and poor visibility, and vary in object density between dense and sparse scenes, enabling novel testing conditions for CP models. Benchmarks run on the dataset show that adverse weather substantially degrades perception performance, with CoBEVT scoring 58.30/52.44/38.90 (AP@30/50/70). The dataset, code and documentation are available at https://labs.cs.queensu.ca/quarrg/datasets/adver-city/.
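The reported AP@30/50/70 scores are average precision computed at IoU thresholds of 0.3, 0.5 and 0.7. As a minimal illustration of how such a metric is computed (this is a generic, simplified sketch using axis-aligned 2D boxes and uninterpolated area under the precision-recall curve, not the paper's exact evaluation code), the core steps are greedy score-ordered matching of predictions to ground truth, followed by integrating precision over recall:

```python
import numpy as np

def iou_2d(a, b):
    # Axis-aligned box IoU; boxes are (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def average_precision(preds, gts, iou_thr):
    # preds: list of (score, box); gts: list of boxes.
    # Greedy matching in descending score order; each ground
    # truth box can match at most one prediction.
    preds = sorted(preds, key=lambda p: -p[0])
    matched = [False] * len(gts)
    tp, fp = [], []
    for score, box in preds:
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if not matched[j]:
                iou = iou_2d(box, gt)
                if iou > best_iou:
                    best_iou, best_j = iou, j
        if best_iou >= iou_thr:
            matched[best_j] = True
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gts), 1)
    precision = tp / np.maximum(tp + fp, 1)
    # Uninterpolated area under the precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

A higher IoU threshold demands tighter localization, which is why AP@70 (38.90) is far below AP@30 (58.30) under adverse weather: detections still exist but their boxes are less precise.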