ACDC: the Adverse Conditions Dataset with Correspondences for Robust Semantic Driving Scene Perception.

📅 2021-04-27
🏛️ IEEE Transactions on Pattern Analysis and Machine Intelligence
📈 Citations: 9
Influential: 0
🤖 AI Summary
Current autonomous driving perception models exhibit severely degraded dense semantic understanding under adverse conditions such as fog, nighttime, rain, and snow, in part because large-scale, multi-condition, pixel-level annotated benchmarks have been missing. Method: The paper introduces ACDC, the first large-scale driving-scene dataset covering these four adverse conditions: 8,012 images in total, of which 4,006 adverse-condition images are each paired with a normal-condition image of the same scene. Every adverse-condition image carries a high-quality pixel-level panoptic annotation and a binary mask distinguishing image regions of clear versus uncertain semantic content, enabling cross-condition generalization and uncertainty-aware learning; 1,503 of the corresponding normal-condition images are also annotated, for 5,509 annotated images overall. Contribution/Results: The paper establishes a unified evaluation framework encompassing semantic, instance, panoptic, and uncertainty-aware segmentation. Empirical evaluation reveals substantial performance degradation of state-of-the-art supervised and unsupervised methods under adverse conditions, positioning ACDC as a much-needed large-scale benchmark for dense perception in adverse weather.
📝 Abstract
Level-5 driving automation requires a robust visual perception system that can parse input images under any condition. However, existing driving datasets for dense semantic perception are either dominated by images captured under normal conditions or are small in scale. To address this, we introduce ACDC, the Adverse Conditions Dataset with Correspondences for training and testing methods for diverse semantic perception tasks on adverse visual conditions. ACDC consists of a large set of 8012 images, half of which (4006) are equally distributed between four common adverse conditions: fog, nighttime, rain, and snow. Each adverse-condition image comes with a high-quality pixel-level panoptic annotation, a corresponding image of the same scene under normal conditions, and a binary mask that distinguishes between intra-image regions of clear and uncertain semantic content. 1503 of the corresponding normal-condition images feature panoptic annotations, raising the total annotated images to 5509. ACDC supports the standard tasks of semantic segmentation, object detection, instance segmentation, and panoptic segmentation, as well as the newly introduced uncertainty-aware semantic segmentation. A detailed empirical study demonstrates the challenges that the adverse domains of ACDC pose to state-of-the-art supervised and unsupervised approaches and indicates the value of our dataset in steering future progress in the field. Our dataset and benchmark are publicly available at https://acdc.vision.ee.ethz.ch.
Problem

Research questions and friction points this paper is trying to address.

Addresses the lack of large-scale driving datasets captured under adverse weather conditions
Provides pixel-level annotated images for robust dense semantic perception tasks
Enables evaluation of vision algorithms under fog, nighttime, rain, and snow
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset comprises 8,012 images, half of which (4,006) are adverse-condition driving images paired with normal-condition images of the same scenes
Provides pixel-level panoptic annotations together with binary masks separating clear from uncertain image regions
Supports semantic, instance, and panoptic segmentation, object detection, and the newly introduced uncertainty-aware semantic segmentation task
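The uncertainty-aware setting relies on ACDC's binary masks that flag image regions of uncertain semantic content. A minimal sketch of how such masks could enter a standard IoU computation is shown below; the function name, the flat pixel-list format, and the convention that a mask value of 1 marks an uncertain pixel are illustrative assumptions, not the paper's official evaluation code.

```python
# Sketch: per-class IoU that skips pixels flagged as semantically uncertain.
# Assumed convention: invalid_mask value 1 = uncertain pixel, excluded from scoring.

def masked_iou(pred, gt, invalid_mask, class_id):
    """IoU for one class over pixels whose mask value is 0 (certain content)."""
    inter = union = 0
    for p, g, m in zip(pred, gt, invalid_mask):
        if m:  # skip pixels labeled as uncertain
            continue
        p_hit = p == class_id
        g_hit = g == class_id
        inter += p_hit and g_hit
        union += p_hit or g_hit
    return inter / union if union else float("nan")

# Toy example: 6 pixels, the last two flagged uncertain and thus ignored.
pred = [1, 1, 0, 1, 0, 1]
gt   = [1, 0, 0, 1, 1, 0]
mask = [0, 0, 0, 0, 1, 1]
print(masked_iou(pred, gt, mask, class_id=1))  # 2/3 over the certain pixels
```

In this toy run, only the first four pixels are scored: class 1 intersects at two of them and unions over three, giving an IoU of 2/3, whereas scoring all six pixels would have penalized the uncertain region.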