DAOcc: 3D Object Detection Assisted Multi-Sensor Fusion for 3D Occupancy Prediction

📅 2024-09-30
🏛️ arXiv.org
📈 Citations: 6
Influential: 1
🤖 AI Summary
Existing 3D semantic occupancy prediction methods rely on high-resolution images and computationally intensive networks, incurring high deployment costs and lacking effective supervision for fused features. To address these limitations, we propose a lightweight, deployment-friendly multi-sensor fusion framework. First, we introduce 3D object detection as an explicit supervisory signal to guide multimodal feature learning in the bird’s-eye view (BEV) space. Second, we design a BEV range expansion strategy to mitigate information loss caused by low-resolution input images (256×704). Third, we adopt a lightweight ResNet-50 backbone for image encoding. Our method achieves state-of-the-art performance on both Occ3D-nuScenes and SurroundOcc benchmarks, significantly outperforming prior approaches. The source code is publicly available.
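The core idea of using 3D object detection as an explicit supervisory signal can be sketched as a multi-task objective over the fused BEV features. The function and weighting coefficient below are illustrative assumptions, not DAOcc's actual implementation:

```python
# Illustrative sketch (not DAOcc's code): the occupancy head and an auxiliary
# 3D detection head are trained jointly on the same fused BEV features, so the
# detection loss acts as extra supervision for multimodal feature learning.

def joint_loss(occ_loss: float, det_loss: float, det_weight: float = 1.0) -> float:
    """Total training loss: occupancy loss plus a weighted detection loss.

    `det_weight` is a hypothetical balancing coefficient; the paper summary
    does not specify how the two terms are weighted.
    """
    return occ_loss + det_weight * det_loss

# Example: detection supervision contributes alongside occupancy supervision.
total = joint_loss(occ_loss=0.8, det_loss=0.4, det_weight=0.5)
```

In this framing, the detection branch can be dropped at deployment time, so the extra supervision improves the fused features without adding inference cost.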

📝 Abstract
Multi-sensor fusion significantly enhances the accuracy and robustness of 3D semantic occupancy prediction, which is crucial for autonomous driving and robotics. However, most existing approaches depend on large image resolutions and complex networks to achieve top performance, hindering their application in practical scenarios. Additionally, most multi-sensor fusion approaches focus on improving fusion features while overlooking the exploration of supervision strategies for these features. To this end, we propose DAOcc, a novel multi-modal occupancy prediction framework that leverages 3D object detection supervision to assist in achieving superior performance, while using a deployment-friendly image feature extraction network and practical input image resolution. Furthermore, we introduce a BEV View Range Extension strategy to mitigate the adverse effects of reduced image resolution. Experimental results show that DAOcc achieves new state-of-the-art performance on the Occ3D-nuScenes and SurroundOcc benchmarks, and surpasses other methods by a significant margin while using only ResNet-50 and a 256×704 input image resolution. Code will be made available at https://github.com/AlphaPlusTT/DAOcc.
Problem

Research questions and friction points this paper is trying to address.

Improving multi-sensor fusion for 3D occupancy prediction
Addressing deployment limitations of high-resolution complex networks
Enhancing supervision strategies for multi-modal feature fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D object detection supervision for fusion
BEV view range extension strategy
Deployment-friendly ResNet-50 backbone
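The BEV view range extension idea can be illustrated with a small grid-size calculation. The annotated range below matches the Occ3D-nuScenes setup (±40 m at 0.4 m voxels); the extended range of ±54 m is a hypothetical value for illustration, not the paper's reported configuration:

```python
# Illustrative sketch of BEV range extension: build features on a BEV grid
# wider than the annotated occupancy range, so LiDAR context beyond the
# annotation boundary can still inform feature learning, then supervise only
# the annotated sub-region.

def bev_grid_cells(xy_range_m: float, cell_size_m: float) -> int:
    """Cells along one axis for a symmetric [-r, r] BEV range."""
    return int(round(2 * xy_range_m / cell_size_m))

# Occ3D-nuScenes annotates occupancy within +/-40 m at 0.4 m resolution.
annotated = bev_grid_cells(40.0, 0.4)   # 200 cells per axis
# Hypothetical extended feature range (assumption for illustration only).
extended = bev_grid_cells(54.0, 0.4)    # 270 cells per axis
```

The extra border cells cost little compute in BEV space but help compensate for the detail lost at the reduced 256×704 image resolution.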
👥 Authors
Zhen Yang (Beijing Mechanical Equipment Institute, Beijing, China)
Yanpeng Dong (Beijing Mechanical Equipment Institute, Beijing, China)
Heng Wang (Beijing Mechanical Equipment Institute, Beijing, China)