AI Summary
This study addresses the challenge of accurately detecting and localizing construction-related objects in dynamic, complex road work zones for autonomous vehicles. The authors propose a multi-sensor fusion approach that combines a YOLO deep neural network with LiDAR point clouds to detect objects in real time, cluster them, and map them into world coordinates, thereby constructing a semantic outline of the construction area. Training combines a newly collected real-world dataset from Berlin with an adapted U.S.-based dataset. Experiments on real-world construction sites show a localization accuracy better than 0.5 m, supporting safer navigation for autonomous vehicles operating in such zones.
Abstract
Road construction sites create major challenges for both autonomous vehicles and human drivers due to their highly dynamic and heterogeneous nature. This paper presents a real-time system that detects and localizes roadworks by combining a YOLO neural network with LiDAR data. The system identifies individual roadwork objects while driving, merges them into coherent construction sites, and records their outlines in world coordinates. The model was trained on an adapted US dataset and a new dataset collected during test drives with a prototype vehicle in Berlin, Germany. Evaluations on real-world road construction sites showed a localization accuracy below 0.5 m. The system can supply traffic authorities with up-to-date roadwork data and could enable autonomous vehicles to navigate construction sites more safely in the future.
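The core fusion step described above — associating LiDAR points with a YOLO bounding box and transforming the result into world coordinates — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the pinhole projection with a generic intrinsics matrix `K`, the simple centroid estimate, and the 4x4 vehicle pose `T_world_vehicle` are all assumptions for the sake of the example (the paper clusters points rather than taking a plain centroid).

```python
import numpy as np

def points_in_box(uv, box):
    """Mask of projected points lying inside a 2-D bounding box (x1, y1, x2, y2)."""
    u, v = uv[:, 0], uv[:, 1]
    x1, y1, x2, y2 = box
    return (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)

def localize_detection(points_vehicle, K, box, T_world_vehicle):
    """Fuse one YOLO box with LiDAR points to estimate a world position.

    points_vehicle : (N, 3) LiDAR points in a camera-aligned frame (z forward).
    K              : 3x3 camera intrinsics (hypothetical pinhole model).
    box            : YOLO box in pixels (x1, y1, x2, y2).
    T_world_vehicle: 4x4 homogeneous vehicle pose in world coordinates.
    """
    pts = points_vehicle[points_vehicle[:, 2] > 0.1]   # drop points behind the camera
    if len(pts) == 0:
        return None
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]                    # perspective division to pixels
    mask = points_in_box(uv, box)
    if not mask.any():
        return None
    centroid = pts[mask].mean(axis=0)                  # the paper uses clustering here
    return (T_world_vehicle @ np.append(centroid, 1.0))[:3]
```

With an identity vehicle pose, a point 5 m ahead of the camera that projects inside the detection box is returned unchanged in world coordinates; points projecting outside the box or lying behind the camera are discarded.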