🤖 AI Summary
This work addresses the significant degradation of visual perception performance in autonomous driving under low-light conditions, a challenge exacerbated by the lack of accurately aligned day–night image pairs from real-world dynamic driving scenarios. To bridge this gap, the authors propose an automated data collection framework based on Trajectory Tracking and Pose Matching (TTPM), enabling the creation of DarkDriving—the first real-world day–night image dataset with centimeter-level alignment, captured on a 69-acre closed test track. DarkDriving comprises 9,538 high-precision image pairs accompanied by human-annotated 2D bounding boxes. The dataset enables multi-task evaluation of low-light enhancement and its downstream impact on 2D/3D object detection, introducing four low-light enhancement tasks tailored to autonomous driving perception. Experiments demonstrate that models trained on DarkDriving not only achieve superior low-light enhancement but also generalize effectively, improving perception on other low-light driving benchmarks such as nuScenes.
📝 Abstract
Low-light conditions are challenging for vision-centric perception systems in autonomous driving. In this paper, we propose a new benchmark dataset, named DarkDriving, to investigate low-light enhancement for autonomous driving. Existing real-world low-light enhancement benchmark datasets are collected by controlling exposure, which is feasible only over small ranges and in static scenes, while the dark images in current nighttime driving datasets lack precisely aligned daytime counterparts. The extreme difficulty of collecting a real-world day–night aligned dataset in dynamic driving scenes has significantly limited research in this area. Using a proposed automatic day–night Trajectory Tracking based Pose Matching (TTPM) method in a large real-world closed driving test field (area: 69 acres), we collected the first real-world day–night aligned dataset for autonomous driving in the dark. The DarkDriving dataset contains 9,538 day–night image pairs precisely aligned in location and spatial content, with an alignment error of only a few centimeters. For each pair, we also manually label 2D object bounding boxes. DarkDriving introduces four perception-related tasks: low-light enhancement, generalized low-light enhancement, and low-light enhancement for 2D and 3D detection in dark driving environments. The experimental results show that DarkDriving provides a comprehensive benchmark for evaluating low-light enhancement for autonomous driving, and that models trained on it generalize to enhance dark images and improve detection in other low-light driving environments, such as nuScenes.
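The abstract does not detail how TTPM pairs frames across the day and night traversals. As a hedged illustration only (not the authors' implementation), assuming each traversal logs a pose `(x, y, yaw)` per frame, a minimal nearest-pose matching sketch could look like this; the distance and heading thresholds are illustrative placeholders, not values from the paper:

```python
import math

def match_poses(day_poses, night_poses, max_dist=0.05, max_yaw=math.radians(1.0)):
    """Hypothetical TTPM-style pairing: for each night pose, find the closest
    day pose. Poses are (x, y, yaw) tuples in meters/radians. Returns a list
    of (night_index, day_index) pairs whose position error is below max_dist
    and whose heading error is below max_yaw (thresholds are illustrative).
    """
    pairs = []
    for ni, (nx, ny, nyaw) in enumerate(night_poses):
        best_di, best_d = None, float("inf")
        for di, (dx, dy, dyaw) in enumerate(day_poses):
            d = math.hypot(nx - dx, ny - dy)
            # wrap the heading difference into [-pi, pi] before comparing
            dtheta = abs((nyaw - dyaw + math.pi) % (2 * math.pi) - math.pi)
            if d < best_d and dtheta <= max_yaw:
                best_di, best_d = di, d
        if best_di is not None and best_d <= max_dist:
            pairs.append((ni, best_di))
    return pairs
```

With centimeter-level trajectory tracking, such a nearest-pose search would yield image pairs taken from nearly identical viewpoints, which is what makes pixel-level day–night supervision possible.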