🤖 AI Summary
This work addresses the challenges of deploying high-accuracy LiDAR-based 3D object detection on edge devices: high computational cost, high energy consumption, and the limited field of view of a single LiDAR. The authors propose a split-computing framework that leverages cooperative perception across multiple infrastructure LiDARs: each edge device performs only shallow network inference and transmits intermediate features to an edge server, which fuses the multi-source point cloud features to produce the final detection results. By introducing a multi-intermediate-output ensemble mechanism, the approach significantly reduces on-device computation (71.6% lower processing time) and communication latency while maintaining high detection accuracy (accuracy degradation ≤1.09%) and preserving data privacy. Evaluated on a real-world dataset, the method achieves a 2.19× end-to-end speedup, demonstrating its effectiveness and practicality.
📝 Abstract
3D object detection using LiDAR-based point cloud data and deep neural networks is essential in autonomous driving technology. However, deploying state-of-the-art models on edge devices presents challenges due to high computational demands and energy consumption. Additionally, single-LiDAR setups suffer from blind spots. This paper proposes SC-MII, a framework for multiple-infrastructure LiDAR-based 3D object detection on edge devices using Split Computing with Multiple Intermediate outputs Integration. In SC-MII, each edge device processes its local point cloud through the initial DNN layers and sends the intermediate outputs to an edge server. The server integrates these features and completes inference, reducing both latency and device load while improving privacy. Experimental results on a real-world dataset show a 2.19× speed-up and a 71.6% reduction in edge device processing time, with at most a 1.09% drop in accuracy.
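The split-computing pipeline described in the abstract can be sketched minimally: each device runs only the shared shallow layers and ships compact intermediate features to the server, which fuses features from all LiDARs and finishes inference. The sketch below is illustrative only, with placeholder random weights and element-wise max fusion standing in for the paper's DNN layers and integration mechanism; it also assumes the per-device feature maps are already spatially aligned, which real multi-viewpoint fusion must handle explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder weights: shared shallow on-device layers and a server-side head.
W_SHALLOW = rng.standard_normal((3, 16)) * 0.1
W_HEAD = rng.standard_normal((16, 4)) * 0.1

def device_stage(points):
    """On-device stage: run only the shallow layers on the local point
    cloud. Only these compact features, not raw points, leave the device."""
    return np.maximum(points @ W_SHALLOW, 0.0)  # ReLU feature map

def server_stage(feature_list):
    """Server stage: integrate intermediate outputs from all infrastructure
    LiDARs (element-wise max fusion here) and complete the inference."""
    fused = np.maximum.reduce(feature_list)
    return fused @ W_HEAD  # toy stand-in for the detection head

# Three infrastructure LiDARs, each observing the scene from its own pose.
clouds = [rng.standard_normal((100, 3)) for _ in range(3)]
features = [device_stage(c) for c in clouds]   # transmitted to the server
detections = server_stage(features)
print(detections.shape)  # (100, 4)
```

Transmitting 16-dimensional features instead of raw (x, y, z) points is also where the privacy benefit comes from: the server never sees the original point clouds.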