🤖 AI Summary
Existing 4D radar ground-truth augmentation (GT-Aug) naively adapts LiDAR-based methods, ignoring radar-specific physical responses—such as sidelobes—leading to distributional distortion in synthetic data. To address this, we propose a novel paradigm: first performing GT augmentation on LiDAR point clouds, then converting them into physically consistent 4D radar tensors via a physics-aware LiDAR-to-4D-Radar Synthesis module (L2RDaS). L2RDaS explicitly models both in-box and out-of-box radar responses—including main lobes and sidelobes—enabling the first GT-Aug formulation tailored to the 4D radar domain. Integrated within an end-to-end 4D tensor modeling and detection training framework, our approach achieves significant improvements in detection accuracy over conventional GT-Aug methods on the K-Radar benchmark. The source code is publicly available.
📝 Abstract
Ground truth augmentation (GT-Aug) is a common method for LiDAR-based object detection, as it enhances object density by leveraging ground truth bounding boxes (GT bboxes). However, directly applying GT-Aug to 4D Radar tensor data overlooks important measurements outside the GT bboxes, such as sidelobes, leading to synthetic distributions that deviate from real-world 4D Radar data. To address this limitation, we propose 4D Radar Ground Truth Augmentation (4DR GT-Aug). Our approach first augments LiDAR data and then converts it to 4D Radar data via a LiDAR-to-4D Radar data synthesis (L2RDaS) module, which explicitly accounts for measurements both inside and outside GT bboxes. In doing so, it produces 4D Radar data distributions that more closely resemble real-world measurements, thereby improving object detection accuracy. Experiments on the K-Radar dataset show that the proposed method achieves improved performance compared to conventional GT-Aug in object detection for 4D Radar. The implementation code is available at https://github.com/kaist-avelab/K-Radar.