🤖 AI Summary
To address discontinuous lane perception in autonomous driving—caused by limited sensor line-of-sight, occlusions, and the absence of high-definition (HD) maps—this paper proposes a vehicle-to-vehicle (V2V) cooperative lane fusion method based on spline interpolation. By sharing local lane detection outputs via V2V communication, vehicles collaboratively construct a distributed spline curve model to jointly estimate occluded or out-of-range lane segments in real time. This work is the first to apply the collective perception paradigm to lane detection, and presents a lightweight, HD-map-free, end-to-end fusion framework. Evaluated across diverse road scenarios, the method achieves real-time inference at ≥30 FPS, extends the effective perception range by up to 200%, and significantly improves lane continuity and robustness, particularly in regions without HD-map coverage.
📝 Abstract
Comprehensive environment perception is essential for autonomous vehicles to operate safely. It is crucial to detect both dynamic road users and static objects like traffic signs or lanes, as these are required for safe motion planning. However, in many circumstances a complete perception of other objects or lanes is not achievable due to limited sensor ranges, occlusions, and curves. In scenarios where accurate localization is not possible, or on roads where no HD maps are available, an autonomous vehicle must rely solely on its perceived road information. Thus, extending local sensing capabilities through collective perception using vehicle-to-vehicle communication is a promising strategy that has not yet been explored for lane detection. Therefore, we propose a real-time capable approach for collective perception of lanes using a spline-based estimation of undetected road sections. We evaluate our proposed fusion algorithm in various situations and road types. Our approach achieves real-time performance and extends the perception range by up to 200%.
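The paper's exact spline formulation is not reproduced here, but the core idea can be sketched: lane points detected by the ego vehicle and lane points received from a remote vehicle (assumed to already be transformed into a common reference frame) are merged and a single spline is fit through them, so the fused curve covers road sections outside the ego vehicle's own sensor range. The function names below (`fuse_lane_points`, `catmull_rom`) and the uniform Catmull-Rom spline are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def catmull_rom(points, samples_per_seg=10):
    """Sample a uniform Catmull-Rom spline through the ordered 2-D points.
    Endpoints are duplicated so the curve passes through all inputs."""
    p = np.vstack([points[0], points, points[-1]])  # clamp the ends
    out = []
    for i in range(1, len(p) - 2):
        p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
        t = np.linspace(0.0, 1.0, samples_per_seg, endpoint=False)[:, None]
        # Standard Catmull-Rom basis; the segment runs from p1 (t=0) to p2 (t=1).
        out.append(
            0.5 * (2 * p1
                   + (-p0 + p2) * t
                   + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                   + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)
        )
    out.append(points[-1][None])  # include the final point exactly
    return np.vstack(out)

def fuse_lane_points(ego_pts, remote_pts):
    """Merge ego and remote lane points (same frame) and fit one spline,
    extending the ego view into the remote-only road section."""
    pts = np.vstack([ego_pts, remote_pts])
    pts = pts[np.argsort(pts[:, 0])]  # order points along the road
    return catmull_rom(pts)

# Ego sees 0-20 m of the lane; a remote vehicle contributes 25-50 m.
ego = np.array([[0.0, 0.0], [10.0, 0.2], [20.0, 0.5]])
remote = np.array([[25.0, 0.7], [35.0, 1.1], [50.0, 1.8]])
lane = fuse_lane_points(ego, remote)  # one continuous curve over 0-50 m
```

In this toy setup the ego vehicle alone covers 20 m of lane, while the fused spline spans the full 50 m contributed by both vehicles, which is the kind of perception-range extension the abstract reports.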