🤖 AI Summary
Urban roadside parking creates non-line-of-sight (NLoS) occlusion zones, significantly increasing the risk of collisions caused by pedestrians suddenly entering traffic. Existing approaches rely on static HD maps or simplified reflection models and fail to adapt to the spatial uncertainty introduced by dynamically changing parked vehicles. This paper proposes a real-time fusion framework that combines monocular camera images with 2D mmWave radar point clouds for NLoS pedestrian localization. It dynamically identifies parked vehicles via image semantic segmentation and depth estimation, and models NLoS regions using radar diffraction/reflection characteristics to enable early, precise detection of occluded pedestrians. The key innovation lies in abandoning predefined geometric assumptions; instead, vision-guided dynamic scene understanding corrects radar-based spatial reasoning, thereby enhancing generalizability and robustness in complex urban environments. Real-world road experiments demonstrate that the method substantially increases pedestrian detection lead time and improves the safety response capability of autonomous driving systems within NLoS blind spots.
📝 Abstract
The presence of Non-Line-of-Sight (NLoS) blind spots resulting from roadside parking in urban environments poses a significant challenge to road safety, particularly due to the sudden emergence of pedestrians. Millimeter-wave (mmWave) radar leverages diffraction and reflection to observe NLoS regions, and recent studies have demonstrated its potential for detecting obscured objects. However, existing approaches predominantly rely on predefined spatial information or assume simple wall reflections, thereby limiting their generalizability and practical applicability. A particular challenge arises in scenarios where pedestrians suddenly appear from between parked vehicles, as these parked vehicles act as temporary spatial obstructions. Furthermore, since parked vehicles are dynamic and may relocate over time, spatial information obtained from satellite maps or other predefined sources may not accurately reflect real-time road conditions, leading to erroneous sensor interpretations. To address this limitation, we propose an NLoS pedestrian localization framework that integrates monocular camera images with 2D radar point cloud (PCD) data. The proposed method first detects parked vehicles through image segmentation, estimates depth to infer approximate spatial characteristics, and subsequently refines this information using 2D radar PCD to achieve precise spatial inference. Experimental evaluations conducted in real-world urban road environments demonstrate that the proposed approach enhances early pedestrian detection and contributes to improved road safety. Supplementary materials are available at https://hiyeun.github.io/NLoS/.
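The pipeline described above (segment parked vehicles, estimate their extent from monocular depth, then check radar returns against the resulting occlusion regions) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the bird's-eye-view vehicle boxes are assumed to come from the segmentation-plus-depth stage, the coordinate frame (x lateral, y longitudinal from the ego vehicle) and the function name `nlos_pedestrian_candidates` are hypothetical, and the real method additionally models diffraction/reflection paths rather than simple geometric shadowing.

```python
import numpy as np

def nlos_pedestrian_candidates(vehicle_boxes, radar_pcd):
    """Flag 2D radar points that fall in the geometric shadow of parked vehicles.

    vehicle_boxes: list of (x_min, x_max, y_min, y_max) in bird's-eye-view
        ego coordinates, assumed inferred from image segmentation + monocular
        depth estimation (hypothetical interface).
    radar_pcd: (N, 2) array of 2D radar points, columns = (x lateral,
        y longitudinal distance from the ego vehicle).
    Returns a boolean mask: True where a return lies behind a parked vehicle,
    i.e. a candidate occluded (NLoS) pedestrian.
    """
    radar_pcd = np.asarray(radar_pcd, dtype=float)
    flags = np.zeros(len(radar_pcd), dtype=bool)
    for i, (x, y) in enumerate(radar_pcd):
        for (x0, x1, y0, y1) in vehicle_boxes:
            # Point is within the vehicle's lateral span but farther away
            # than its far edge -> it sits in the occluded region.
            if x0 <= x <= x1 and y > y1:
                flags[i] = True
                break
    return flags

# Toy usage: one parked vehicle spanning x in [1, 5] m, y in [2, 4] m.
boxes = [(1.0, 5.0, 2.0, 4.0)]
pcd = np.array([[3.0, 6.0],   # behind the vehicle -> NLoS candidate
                [0.0, 3.0]])  # beside the vehicle -> directly visible
print(nlos_pedestrian_candidates(boxes, pcd))  # [ True False]
```

In the actual framework the radar PCD would then refine the vision-derived boxes themselves; this sketch only shows the final candidate-gating step under a pure line-of-sight shadowing assumption.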