🤖 AI Summary
This work addresses the high computational and memory demands of 3D semantic occupancy prediction, which hinder real-time deployment in autonomous driving. To tackle this challenge, the authors propose a semantics- and uncertainty-guided sparse learning framework that leverages the inherent sparsity of 3D scenes to significantly reduce redundancy while preserving geometric and semantic completeness. Key innovations include suppressing free-space projections using semantic and uncertainty priors, unsigned distance field encoding, hyper-cross sparse convolutions, a cascaded sparse completion module, and a lightweight OCR mask decoder. Evaluated on SemanticKITTI, the method achieves a 7.34% improvement in accuracy alongside a 57.8% gain in inference efficiency.
📝 Abstract
As autonomous driving moves toward full scene understanding, 3D semantic occupancy prediction has emerged as a crucial perception task, offering voxel-level semantics beyond traditional detection and segmentation paradigms. However, such a refined representation of the scene incurs prohibitive computation and memory overhead, posing a major barrier to practical real-time deployment. To address this, we propose SUG-Occ, an explicit Semantics- and Uncertainty-Guided sparse learning framework for 3D occupancy prediction, which exploits the inherent sparsity of 3D scenes to reduce redundant computation while maintaining geometric and semantic completeness. Specifically, we first utilize semantic and uncertainty priors to suppress projections from free space during view transformation, while employing an explicit unsigned distance encoding to enhance geometric consistency, producing a structurally consistent sparse 3D representation. Second, we design a cascaded sparse completion module built on hyper-cross sparse convolution and generative upsampling to enable efficient coarse-to-fine reasoning. Finally, we devise an object contextual representation (OCR) based mask decoder that aggregates global semantic context from sparse features and refines voxel-wise predictions via lightweight query-context interactions, avoiding expensive attention operations over volumetric features. Extensive experiments on the SemanticKITTI benchmark demonstrate that the proposed approach outperforms the baselines, achieving a 7.34% improvement in accuracy and a 57.8% gain in efficiency.
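The semantics- and uncertainty-guided sparsification described above can be illustrated with a minimal sketch: per-voxel class logits yield a free-space probability (the semantic prior) and a normalized entropy (the uncertainty prior); confidently free voxels are dropped, while uncertain ones are retained so that completeness is not sacrificed. This is an illustrative reconstruction, not the authors' implementation; the function name, thresholds, and tensor shapes are assumptions.

```python
import numpy as np

def sparsify_voxels(sem_logits, free_idx=0, free_thresh=0.9, unc_thresh=0.5):
    """Keep voxels that are likely occupied or too uncertain to discard.

    sem_logits : (N, C) per-voxel class logits (hypothetical layout).
    free_idx   : index of the free-space class.
    Returns a boolean keep-mask over the N candidate voxels.
    """
    # Softmax over classes -> per-voxel semantic prior.
    z = sem_logits - sem_logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Normalized entropy in [0, 1] serves as the uncertainty prior.
    ent = -(p * np.log(p + 1e-12)).sum(axis=1) / np.log(p.shape[1])
    # Suppress confident free space; keep uncertain voxels for completeness.
    return (p[:, free_idx] < free_thresh) | (ent > unc_thresh)
```

Only the surviving voxels would then be fed to the sparse completion stage, which is where the computational savings over a dense volumetric pipeline come from.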