4D Radar Ground Truth Augmentation with LiDAR-to-4D Radar Data Synthesis

📅 2025-03-05
🤖 AI Summary
Existing 4D radar ground-truth augmentation (GT-Aug) naively adapts LiDAR-based methods, ignoring radar-specific physical responses—such as sidelobes—leading to distributional distortion in synthetic data. To address this, we propose a novel paradigm: first performing GT augmentation on LiDAR point clouds, then converting them into physically consistent 4D radar tensors via a physics-aware LiDAR-to-4D-Radar Synthesis module (L2RDaS). L2RDaS explicitly models both in-box and out-of-box radar responses—including main lobes and sidelobes—enabling the first GT-Aug formulation tailored to the 4D radar domain. Integrated within an end-to-end 4D tensor modeling and detection training framework, our approach achieves significant improvements in detection accuracy over conventional GT-Aug methods on the K-Radar benchmark. The source code is publicly available.

📝 Abstract
Ground truth augmentation (GT-Aug) is a common method for LiDAR-based object detection, as it enhances object density by leveraging ground truth bounding boxes (GT bboxes). However, directly applying GT-Aug to 4D Radar tensor data overlooks important measurements outside the GT bboxes, such as sidelobes, leading to synthetic distributions that deviate from real-world 4D Radar data. To address this limitation, we propose 4D Radar Ground Truth Augmentation (4DR GT-Aug). Our approach first augments LiDAR data and then converts it to 4D Radar data via a LiDAR-to-4D Radar data synthesis (L2RDaS) module, which explicitly accounts for measurements both inside and outside GT bboxes. In doing so, it produces 4D Radar data distributions that more closely resemble real-world measurements, thereby improving object detection accuracy. Experiments on the K-Radar dataset show that the proposed method achieves improved performance compared to conventional GT-Aug in object detection for 4D Radar. The implementation code is available at https://github.com/kaist-avelab/K-Radar.
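The LiDAR-side first stage of the pipeline is standard GT-Aug: sampled ground-truth objects (their points plus their bboxes) are pasted into a scene to raise object density, skipping samples that would collide with existing objects. A minimal sketch of that idea, assuming axis-aligned boxes `[x, y, z, dx, dy, dz]` and a simple bird's-eye-view collision check; the function names are hypothetical and this is not the paper's implementation:

```python
import numpy as np

def boxes_overlap_bev(a, b):
    """Axis-aligned bird's-eye-view overlap test for boxes [x, y, z, dx, dy, dz]."""
    return (abs(a[0] - b[0]) < (a[3] + b[3]) / 2 and
            abs(a[1] - b[1]) < (a[4] + b[4]) / 2)

def gt_aug(scene_points, scene_boxes, db_samples, max_paste=5):
    """Paste sampled GT objects (points, box) into a LiDAR scene.

    A sample is pasted only if its box does not overlap any box already
    in the scene (originals or previously pasted samples).
    """
    points = [scene_points]
    boxes = list(scene_boxes)
    pasted = 0
    for obj_points, obj_box in db_samples:
        if pasted >= max_paste:
            break
        if any(boxes_overlap_bev(obj_box, b) for b in boxes):
            continue  # collision with an existing object: skip this sample
        points.append(obj_points)
        boxes.append(obj_box)
        pasted += 1
    return np.concatenate(points, axis=0), np.stack(boxes), pasted
```

The paper's contribution starts after this step: rather than pasting directly into radar tensors, the augmented LiDAR point cloud is passed through L2RDaS to synthesize a 4D Radar tensor with physically consistent in-box and out-of-box responses.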
Problem

Research questions and friction points this paper is trying to address.

LiDAR-style GT-Aug applied to 4D Radar tensors ignores measurements outside GT bboxes, such as sidelobes
Synthetic 4D Radar distributions deviate from real-world measurements
Low object density limits 4D Radar object detection accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Performs GT augmentation on LiDAR point clouds before radar synthesis
Converts augmented LiDAR data to 4D Radar via the L2RDaS module, modeling responses inside and outside GT bboxes
Outperforms conventional GT-Aug in 4D Radar object detection on K-Radar
Woo-Jin Jung
CCS Graduate School of Mobility, Korea Advanced Institute of Science and Technology, 193, Munji-ro, Yuseong-gu, Daejeon 34051, Republic of Korea
Dong-Hee Paek
KAIST
4D Radar · LiDAR · Camera · Sensor Fusion
Seung-Hyun Kong
CCS Graduate School of Mobility, Korea Advanced Institute of Science and Technology, 193, Munji-ro, Yuseong-gu, Daejeon 34051, Republic of Korea