Reproducing and Extending RaDelft 4D Radar with Camera-Assisted Labels

📅 2025-12-02
🤖 AI Summary
4D radar semantic segmentation is hindered by the scarcity of high-quality annotated data; the existing RaDelft dataset provides only LiDAR annotations and lacks open-source labeling code, limiting reproducibility and downstream research. To address this, we propose the first reproducible camera-guided radar annotation framework: it projects radar point clouds onto camera images, performs semantic segmentation on the camera stream using YOLOv8 and SegFormer, and fuses the results via 3D spatial clustering, enabling fully automatic radar point cloud annotation without human intervention. We further conduct the first quantitative analysis of fog's impact on cross-modal annotation performance. Our method reproduces and improves upon RaDelft, boosting radar label mIoU by 12.3%. We publicly release the complete pipeline code and the generated radar annotations, establishing an open, robust, and scalable data annotation paradigm for 4D radar perception.
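The summary describes a projection step that carries camera segmentation labels over to radar points. The paper's released code is not reproduced here; the following is a minimal sketch of that step, assuming a pinhole camera model with known intrinsics `K` and a radar-to-camera extrinsic transform `T_cam_radar` (function and parameter names are hypothetical):

```python
import numpy as np

def label_radar_points(points_xyz, seg_mask, K, T_cam_radar):
    """Assign per-point semantic labels by projecting radar points
    into a camera semantic-segmentation mask.

    points_xyz : (N, 3) radar points in the radar frame
    seg_mask   : (H, W) integer class-id mask (e.g. from SegFormer)
    K          : (3, 3) camera intrinsic matrix
    T_cam_radar: (4, 4) radar-to-camera extrinsic transform
    """
    N = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((N, 1))])   # homogeneous coords
    pts_cam = (T_cam_radar @ pts_h.T).T[:, :3]         # radar -> camera frame
    labels = np.full(N, -1, dtype=int)                 # -1 = unlabeled

    in_front = pts_cam[:, 2] > 0                       # points ahead of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)        # perspective divide -> pixels

    H, W = seg_mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = seg_mask[uv[valid, 1], uv[valid, 0]]  # look up class at pixel
    return labels
```

Points behind the camera or projecting outside the image stay unlabeled (`-1`); the paper's 3D clustering stage would then fuse and denoise these raw per-point labels.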

📝 Abstract
Recent advances in 4D radar highlight its potential for robust environment perception under adverse conditions, yet progress in radar semantic segmentation remains constrained by the scarcity of open-source datasets and labels. The RaDelft dataset, although seminal, provides only LiDAR annotations and no public code to generate radar labels, limiting reproducibility and downstream research. In this work, we reproduce the numerical results of the RaDelft group and demonstrate that a camera-guided radar labeling pipeline can generate accurate labels for radar point clouds without relying on human annotations. By projecting radar point clouds into camera-based semantic segmentation masks and applying spatial clustering, we generate labels that significantly improve radar labeling accuracy. These results establish a reproducible framework that allows the research community to train and evaluate models on labeled 4D radar data. In addition, we study and quantify how different fog levels affect radar labeling performance.
Problem

Research questions and friction points this paper is trying to address.

Reproduces RaDelft 4D radar results without human annotations
Generates radar labels via camera-guided semantic segmentation
Quantifies fog impact on radar labeling performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Camera-guided labeling for radar point clouds
Spatial clustering enhances radar label accuracy
Reproducible framework for 4D radar data training
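The clustering stage named above can be illustrated as a simple majority vote inside spatial clusters: points whose pairwise distances chain within a radius form one cluster, and every point in a cluster receives the cluster's most common label. The paper's exact clustering algorithm and parameters are not stated here, so `eps` and the single-linkage grouping below are assumptions for illustration:

```python
import numpy as np

def majority_vote_labels(points_xyz, labels, eps=1.0):
    """Refine noisy per-point labels: group points into single-linkage
    connected components (neighbors within `eps` meters), then assign
    each cluster its majority label. `eps` is a hypothetical radius."""
    N = len(points_xyz)
    dist = np.linalg.norm(points_xyz[:, None] - points_xyz[None, :], axis=-1)
    adj = dist <= eps                       # adjacency by distance threshold
    cluster = np.full(N, -1, dtype=int)
    cid = 0
    for i in range(N):                      # BFS over the adjacency graph
        if cluster[i] != -1:
            continue
        stack = [i]
        cluster[i] = cid
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(adj[j]):
                if cluster[k] == -1:
                    cluster[k] = cid
                    stack.append(k)
        cid += 1
    out = labels.copy()
    for c in range(cid):                    # majority vote per cluster
        members = cluster == c
        vals, counts = np.unique(labels[members], return_counts=True)
        out[members] = vals[np.argmax(counts)]
    return out
```

The O(N²) distance matrix keeps the sketch short; a KD-tree or DBSCAN implementation would be the practical choice at radar point-cloud scale.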