mmWave Radar-Based Non-Line-of-Sight Pedestrian Localization at T-Junctions Utilizing Road Layout Extraction via Camera

📅 2025-08-04
🤖 AI Summary
Accurate pedestrian localization in non-line-of-sight (NLoS) urban scenarios—such as T-junctions—is challenging: millimeter-wave (mmWave) radar suffers from multipath-induced point cloud spatial distortion, while cameras lack depth perception and cannot observe occluded regions. To address this, we propose the first vision-guided mmWave radar framework for NLoS pedestrian localization. Our method leverages camera-based semantic segmentation to extract road-structure priors, which guide spatiotemporal alignment and geometric reasoning over radar point clouds, effectively suppressing multipath-induced localization errors. By performing cross-modal image-radar joint reconstruction, our approach achieves stable, high-precision pedestrian localization within NLoS regions on a real-world vehicular platform. Experimental results demonstrate significant improvements in robustness and accuracy for complex urban perception, establishing a new paradigm for multimodal NLoS sensing.
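The geometric reasoning the summary describes rests on the standard NLoS mirror-symmetry idea: a pedestrian detected via a wall reflection appears as a "ghost" point, and the true position is its mirror image across the reflecting wall, whose line can be inferred from the camera's road-layout prior. The paper does not publish its equations here, so the sketch below is only an illustration of that mirror geometry; the function name, coordinate convention, and wall-line parameterization are assumptions, not the authors' implementation.

```python
import numpy as np

def unfold_ghost_point(ghost_xy, wall_p0, wall_p1):
    """Reflect a multipath 'ghost' radar detection across the wall line
    (given by two points on it, e.g. from the camera road layout) to
    recover the true NLoS position. All coordinates are 2D BEV (x, y).
    NOTE: illustrative sketch only; not the paper's actual algorithm."""
    p = np.asarray(ghost_xy, dtype=float)
    a = np.asarray(wall_p0, dtype=float)
    b = np.asarray(wall_p1, dtype=float)
    d = b - a
    d /= np.linalg.norm(d)            # unit direction along the wall
    proj = a + np.dot(p - a, d) * d   # foot of perpendicular on the wall line
    return 2.0 * proj - p             # mirror image across the wall
```

For example, with the wall along the x-axis, a ghost point at (6, 2) unfolds to (6, -2) on the far side of the wall; applying the reflection twice returns the original point.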

📝 Abstract
Pedestrian localization in non-line-of-sight (NLoS) regions within urban environments poses a significant challenge for autonomous driving systems. While mmWave radar has demonstrated potential for detecting objects in such scenarios, the 2D radar point cloud (PCD) data is susceptible to distortions caused by multipath reflections, making accurate spatial inference difficult. Additionally, although camera images provide high-resolution visual information, they lack depth perception and cannot directly observe objects in NLoS regions. In this paper, we propose a novel framework that interprets radar PCD through the road layout inferred from the camera to localize NLoS pedestrians. The proposed method leverages visual information from the camera to interpret 2D radar PCD, enabling spatial scene reconstruction. The effectiveness of the proposed approach is validated through experiments conducted with a radar-camera system mounted on a real vehicle. Localization performance is evaluated on a dataset collected in outdoor NLoS driving environments, demonstrating the practical applicability of the method.
Problem

Research questions and friction points this paper is trying to address.

Localizing NLoS pedestrians using mmWave radar and camera data
Overcoming radar distortions from multipath reflections in urban areas
Enhancing spatial inference by fusing radar and visual road layout
Innovation

Methods, ideas, or system contributions that make the work stand out.

mmWave radar and camera fusion for NLoS localization
Road layout extraction enhances radar PCD interpretation
Spatial scene reconstruction from 2D radar and visual data
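The "road layout extraction" contribution above implies recovering the occluding wall boundary from a camera-derived segmentation. The paper does not specify the extraction algorithm in this summary, so the following is a minimal sketch under stated assumptions: a bird's-eye-view binary road mask (1 = road, 0 = building) from semantic segmentation, with the wall edge fit per column by least squares. The function name and mask convention are hypothetical.

```python
import numpy as np

def fit_wall_line(road_mask):
    """Fit the building-wall boundary as a line y = m*x + c from a BEV
    binary road mask (1 = road, 0 = building). For each column, the first
    non-road row (scanning from row 0) is taken as the wall edge.
    NOTE: illustrative sketch; not the paper's actual extraction method."""
    xs, ys = [], []
    for x in range(road_mask.shape[1]):
        col = road_mask[:, x]
        edge = int(np.argmax(col == 0))   # first building pixel in column
        if col[edge] == 0:                # column actually contains a wall
            xs.append(x)
            ys.append(edge)
    m, c = np.polyfit(xs, ys, 1)          # least-squares line fit
    return m, c
```

For a mask whose building region starts at row 5 in every column, the fit recovers a horizontal wall line at y = 5; such a fitted line is exactly the kind of prior that can anchor the mirror-reflection reasoning over the radar PCD.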
Authors
Byeonggyu Park (Seoul National University)
Hee-Yeun Kim (Seoul National University)
Byonghyok Choi (Samsung Electro-Mechanics Co., Ltd.)
Hansang Cho (Samsung Electro-Mechanics, Vice President of Technology)
Byungkwan Kim (Chungnam National University)
Soomok Lee (Ajou University)
Mingu Jeon (Seoul National University)
Seong-Woo Kim (Seoul National University)