🤖 AI Summary
To address key bottlenecks in 3D multimodal occupancy prediction for autonomous driving—namely, inaccurate depth estimation in camera-based methods and poor occlusion robustness in LiDAR-based approaches—this paper proposes a Semantic-and-Depth Guided (SDG) view transformation framework. The method jointly models pixel-level semantics and point-cloud depth via two lightweight fusion architectures: SDG-Fusion and SDG-KL. An occupancy-driven active distillation strategy is introduced to enhance image semantic representation. Furthermore, depth distribution is modeled using diffusion-based sampling combined with bilinear discretization, improving geometric-semantic joint modeling capability. Evaluated on Occ3D-nuScenes, the method achieves state-of-the-art performance (mIoU = 32.1%) with real-time inference speed (≥25 FPS). Strong generalization is demonstrated on SurroundOcc-nuScenes, validating both effectiveness and practicality.
📝 Abstract
Multimodal 3D occupancy prediction has garnered significant attention for its potential in autonomous driving. However, most existing approaches are single-modality: camera-based methods lack depth information, while LiDAR-based methods struggle with occlusions. Current lightweight methods primarily rely on the Lift-Splat-Shoot (LSS) pipeline, which suffers from inaccurate depth estimation and fails to fully exploit the geometric and semantic information of 3D LiDAR points. Therefore, we propose a novel multimodal occupancy prediction network called SDG-OCC, which incorporates a joint semantic and depth-guided view transformation coupled with a fusion-to-occupancy-driven active distillation. The enhanced view transformation constructs accurate depth distributions by integrating pixel semantics and co-point depth through diffusion and bilinear discretization. The fusion-to-occupancy-driven active distillation extracts rich semantic information from multimodal data and selectively transfers knowledge to image features based on LiDAR-identified regions. Finally, for optimal performance, we introduce SDG-Fusion, which uses fusion alone, and SDG-KL, which integrates both fusion and distillation for faster inference. Our method achieves state-of-the-art (SOTA) performance with real-time processing on the Occ3D-nuScenes dataset and shows comparable performance on the more challenging SurroundOcc-nuScenes dataset, demonstrating its effectiveness and robustness. The code will be released at https://github.com/DzpLab/SDGOCC.