SDGOCC: Semantic and Depth-Guided Bird's-Eye View Transformation for 3D Multimodal Occupancy Prediction

📅 2025-07-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address two key bottlenecks in 3D multimodal occupancy prediction for autonomous driving, namely inaccurate depth estimation in camera-based methods and poor occlusion robustness in LiDAR-based methods, this paper proposes a semantic- and depth-guided (SDG) view transformation framework. The method jointly models pixel-level semantics and point-cloud depth via two lightweight fusion architectures, SDG-Fusion and SDG-KL, and introduces an occupancy-driven active distillation strategy to enrich the image semantic representation. The depth distribution is modeled with diffusion-based sampling combined with bilinear discretization, improving joint geometric and semantic modeling. Evaluated on Occ3D-nuScenes, the method achieves state-of-the-art performance (mIoU = 32.1%) at real-time inference speed (≥25 FPS), and it generalizes well to SurroundOcc-nuScenes, validating both its effectiveness and practicality.

📝 Abstract
Multimodal 3D occupancy prediction has garnered significant attention for its potential in autonomous driving. However, most existing approaches are single-modality: camera-based methods lack depth information, while LiDAR-based methods struggle with occlusions. Current lightweight methods primarily rely on the Lift-Splat-Shoot (LSS) pipeline, which suffers from inaccurate depth estimation and fails to fully exploit the geometric and semantic information of 3D LiDAR points. Therefore, we propose a novel multimodal occupancy prediction network called SDG-OCC, which incorporates a joint semantic and depth-guided view transformation coupled with a fusion-to-occupancy-driven active distillation. The enhanced view transformation constructs accurate depth distributions by integrating pixel semantics and co-point depth through diffusion and bilinear discretization. The fusion-to-occupancy-driven active distillation extracts rich semantic information from multimodal data and selectively transfers knowledge to image features based on LiDAR-identified regions. Finally, for optimal performance, we introduce SDG-Fusion, which uses fusion alone, and SDG-KL, which integrates both fusion and distillation for faster inference. Our method achieves state-of-the-art (SOTA) performance with real-time processing on the Occ3D-nuScenes dataset and shows comparable performance on the more challenging SurroundOcc-nuScenes dataset, demonstrating its effectiveness and robustness. The code will be released at https://github.com/DzpLab/SDGOCC.
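The depth-distributed "lift" step of the Lift-Splat-Shoot (LSS) pipeline that the paper's view transformation builds on can be sketched as follows. This is a minimal illustrative sketch, not the paper's code: the function name, tensor shapes, and NumPy implementation are all assumptions.

```python
import numpy as np

def lss_lift(features, depth_logits):
    """Lift 2D image features into a per-pixel depth-distributed frustum,
    in the style of Lift-Splat-Shoot (illustrative sketch).
    features:     (C, H, W) per-pixel semantic features
    depth_logits: (D, H, W) unnormalized scores over D discrete depth bins
    returns:      (D, C, H, W) frustum features, the outer product of the
                  per-pixel depth distribution with the image features."""
    # Numerically stable softmax over the depth-bin axis gives each pixel
    # a categorical distribution over depth.
    d = np.exp(depth_logits - depth_logits.max(axis=0, keepdims=True))
    d /= d.sum(axis=0, keepdims=True)
    # Outer product: each depth bin receives the pixel's feature vector
    # scaled by that bin's probability.
    return d[:, None] * features[None]
```

Because the depth distribution sums to one at every pixel, summing the frustum over the depth axis recovers the original image features; SDG-OCC's contribution, per the abstract, is to sharpen this distribution using pixel semantics and co-point LiDAR depth rather than image cues alone.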
Problem

Research questions and friction points this paper is trying to address.

Improves 3D occupancy prediction by integrating semantic and depth data
Addresses inaccurate depth estimation in camera-based autonomous driving systems
Enhances multimodal fusion with active distillation for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic and depth-guided view transformation
Fusion-to-occupancy-driven active distillation
Diffusion and bilinear discretization integration
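To make the fusion-to-occupancy-driven active distillation idea concrete (selectively transferring knowledge from fused multimodal features to the image branch in LiDAR-identified regions), here is a hypothetical masked KL-distillation loss. All names, shapes, and the masking scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def masked_kl_distill(student_logits, teacher_logits, mask):
    """Hypothetical region-selective KL distillation (sketch).
    student_logits: (N, K) per-voxel class logits from the image branch
    teacher_logits: (N, K) per-voxel class logits from the fused branch
    mask:           (N,) boolean, True where LiDAR identifies a region
                    to distill; other positions are ignored."""
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    log_p = log_softmax(teacher_logits)  # teacher distribution (log-space)
    log_q = log_softmax(student_logits)  # student distribution (log-space)
    # Per-position KL(p || q), averaged only over the masked region.
    kl = (np.exp(log_p) * (log_p - log_q)).sum(axis=-1)
    return kl[mask].mean() if mask.any() else 0.0
```

The loss is zero where student and teacher agree, and restricting the average to the mask means gradients flow only through regions the LiDAR deems reliable, which matches the abstract's description of selective knowledge transfer.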
Zaipeng Duan
Huazhong University of Science and Technology
Chenxu Dang
Huazhong University of Science and Technology
Computer Vision, Autonomous Driving
Xuzhong Hu
Huazhong University of Science and Technology
Pei An
Huazhong University of Science and Technology
Junfeng Ding
Huazhong University of Science and Technology
Jie Zhan
Huazhong University of Science and Technology
Yunbiao Xu
Huazhong University of Science and Technology
Jie Ma
Huazhong University of Science and Technology