🤖 AI Summary
To address the challenge of balancing accuracy and real-time performance for semantic segmentation on high-resolution, 128-line automotive LiDAR point clouds, this work proposes a lightweight and efficient framework tailored for autonomous driving. First, we introduce the first automotive-grade, 128-line LiDAR dataset captured in urban traffic scenarios. Second, we pioneer the incorporation of surface normals as a strong geometric prior to enhance model robustness against point cloud sparsity and occlusion. Third, we design a co-optimized encoder-inference architecture specifically adapted to high-resolution LiDAR data and implement an end-to-end deployable system within the ROS2 framework. Experimental results demonstrate real-time inference at over 30 FPS on our proprietary dataset while achieving state-of-the-art accuracy. The code and dataset are publicly released, and the system is validated through real-vehicle deployment.
📝 Abstract
Numerous recent works emphasize semantic segmentation of LiDAR data as a critical component in the development of driver-assistance systems and autonomous vehicles. However, many state-of-the-art methods are evaluated on outdated, lower-resolution LiDAR sensors and struggle to meet real-time constraints. This study introduces a novel semantic segmentation framework tailored to modern high-resolution LiDAR sensors that addresses both accuracy and real-time processing demands. We propose a novel LiDAR dataset collected with a cutting-edge automotive 128-layer LiDAR in urban traffic scenes. Furthermore, we propose a semantic segmentation method utilizing surface normals as strong input features. Our approach bridges the gap between cutting-edge research and practical automotive applications. Additionally, we provide a Robot Operating System 2 (ROS2) implementation that we deploy on our research vehicle. Our dataset and code are publicly available: https://github.com/kav-institute/SemanticLiDAR.
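The abstract does not specify how the surface-normal features are computed; a common approach for spinning LiDAR is to exploit the sensor's range-image organization, where neighboring pixels correspond to neighboring laser returns, and take the cross product of the point map's spatial gradients. The sketch below (NumPy, illustrative only; the function name and layout are assumptions, not the authors' implementation) demonstrates this on an organized `(H, W, 3)` point cloud:

```python
import numpy as np

def estimate_normals(points):
    """Estimate per-point surface normals for an organized LiDAR point
    cloud of shape (H, W, 3), where rows are laser layers and columns
    are azimuth steps (range-image layout).

    Normals are the normalized cross product of the horizontal and
    vertical spatial gradients of the point map (one common scheme;
    not necessarily the paper's exact method)."""
    dcol = np.gradient(points, axis=1)  # gradient along azimuth, (H, W, 3)
    drow = np.gradient(points, axis=0)  # gradient along layers,  (H, W, 3)
    normals = np.cross(dcol, drow)      # perpendicular to the local surface
    # Normalize; guard against zero-length vectors from missing returns
    norm = np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)

# Toy check: a flat ground plane (z = 0) should yield normals along +z
H, W = 8, 16
xs, ys = np.meshgrid(np.linspace(0, 1, W), np.linspace(0, 1, H))
plane = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)
n = estimate_normals(plane)
```

The resulting `(H, W, 3)` normal map can then be stacked with range and intensity as additional input channels for the segmentation network.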