Efficient On-Chip Implementation of 4D Radar-Based 3D Object Detection on Hailo-8L

📅 2025-05-01
📈 Citations: 0 · Influential: 0
🤖 AI Summary
To address the challenge of deploying 4D radar-based 3D object detection models on low-power embedded platforms—specifically the Hailo-8L accelerator, which lacks native support for 5D tensors—we propose a compile-time tensor reshaping technique. This method losslessly maps 5D radar voxel inputs into a 4D format, circumventing hardware-imposed dimensional constraints without altering the network architecture. Leveraging a standard 3D CNN backbone, we integrate customized compiler optimizations and hardware-aware co-adaptation tailored to the Hailo-8L. The resulting system enables efficient, real-time inference at the edge. Experimental evaluation demonstrates strong robustness under adverse weather conditions, achieving 46.47% AP_3D and 52.75% AP_BEV—comparable to GPU-based counterparts—while sustaining a throughput of 13.76 Hz. To our knowledge, this is the first 4D radar 3D detection system deployed on the Hailo-8L, establishing a viable pathway for edge-deployable autonomous driving perception.

📝 Abstract
4D radar has attracted attention in autonomous driving due to its ability to enable robust 3D object detection even under adverse weather conditions. To practically deploy such technologies, it is essential to achieve real-time processing within low-power embedded environments. Addressing this, we present the first on-chip implementation of a 4D radar-based 3D object detection model on the Hailo-8L AI accelerator. Although conventional 3D convolutional neural network (CNN) architectures require 5D inputs, the Hailo-8L only supports 4D tensors, posing a significant challenge. To overcome this limitation, we introduce a tensor transformation method that reshapes 5D inputs into 4D formats during the compilation process, enabling direct deployment without altering the model structure. The proposed system achieves 46.47% AP_3D and 52.75% AP_BEV, maintaining comparable accuracy to GPU-based models while achieving an inference speed of 13.76 Hz. These results demonstrate the applicability of 4D radar-based perception technologies to autonomous driving systems.
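The paper does not publish the exact axis mapping used by its compile-time transformation, but the core idea—losslessly folding a 5D voxel tensor into a 4D one so a 4D-only accelerator can consume it—can be illustrated with a plain reshape. The sketch below is a hypothetical illustration in NumPy, assuming an `(N, C, D, H, W)` voxel layout with the depth axis folded into channels; the actual Hailo-8L compiler pass may choose a different layout.

```python
import numpy as np

def fold_to_4d(x: np.ndarray) -> np.ndarray:
    """Fold the depth axis of a 5D voxel tensor (N, C, D, H, W)
    into the channel axis, yielding (N, C*D, H, W).
    A reshape only reinterprets the memory layout, so no values
    are changed or lost."""
    n, c, d, h, w = x.shape
    return x.reshape(n, c * d, h, w)

def unfold_to_5d(x: np.ndarray, c: int, d: int) -> np.ndarray:
    """Inverse mapping: recover the original (N, C, D, H, W) tensor."""
    n, cd, h, w = x.shape
    assert cd == c * d, "channel axis must factor as C*D"
    return x.reshape(n, c, d, h, w)

# Hypothetical radar voxel grid: batch 1, 4 features, 8 depth bins, 64x64 BEV cells.
voxels = np.random.rand(1, 4, 8, 64, 64).astype(np.float32)
folded = fold_to_4d(voxels)          # shape (1, 32, 64, 64) — 4D, accelerator-friendly
restored = unfold_to_5d(folded, 4, 8)
assert np.array_equal(voxels, restored)  # round trip is lossless
```

Because the transformation is a pure reshape, it can be applied at compile time to the model's input signature without retraining or modifying any weights—consistent with the paper's claim of deployment "without altering the model structure."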
Problem

Research questions and friction points this paper is trying to address.

Enable real-time 4D radar processing in low-power embedded systems
Overcome 5D-to-4D tensor conversion for on-chip 3D object detection
Maintain accuracy while optimizing inference speed for autonomous driving
Innovation

Methods, ideas, or system contributions that make the work stand out.

On-chip 4D radar 3D detection on Hailo-8L
Tensor transformation for 5D to 4D conversion
Real-time low-power embedded radar processing
Woong-Chan Byun — M.S. student, CCS Graduate School of Mobility, KAIST
Dong-Hee Paek — KAIST
Seung-Hyun Song — M.S. student, Graduate School of Advanced Security Science and Technology, KAIST
Seung-Hyun Kong — Professor, CCS Graduate School of Mobility, KAIST