🤖 AI Summary
To address the inefficiency and artifacts caused by randomly initialized Gaussian primitives in monocular 3D Semantic Scene Completion (SSC), this paper proposes a framework built on depth-guided Gaussian initialization and decoupled aggregation. Concretely: (1) a depth-guided Group-wise Multi-scale Fusion module generates geometry-aware initial Gaussians; (2) a Decoupled Gaussian Aggregator separates the geometric and semantic prediction branches to suppress noise from outlier primitives; (3) a Gaussian-to-voxel splatting mechanism is combined with a Probability Scale Loss. On Occ-ScanNet, the method achieves state-of-the-art performance, surpassing prior work by over 6.3% IoU and 4.1% mIoU, while reducing inference latency and memory consumption by more than 9.3%, improving both computational efficiency and robustness.
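The geometric core of depth-guided initialization is unprojecting a predicted depth map into 3D to seed Gaussian centers, rather than scattering them randomly. The sketch below illustrates only that idea with a pinhole camera model; the function name, the fixed placeholder scales, and the uniform pixel stride are illustrative assumptions, not the paper's GMF module (which fuses multi-scale image and depth features).

```python
import numpy as np

def init_gaussians_from_depth(depth, K, stride=8):
    """Hypothetical sketch: seed Gaussian centers by unprojecting a depth map.

    depth: (H, W) metric depth map
    K:     (3, 3) camera intrinsics
    stride: pixel subsampling step, giving a sparse set of primitives
    """
    H, W = depth.shape
    vs, us = np.mgrid[0:H:stride, 0:W:stride]
    us, vs = us.ravel(), vs.ravel()
    z = depth[vs, us]
    valid = z > 0                      # drop pixels with no depth
    us, vs, z = us[valid], vs[valid], z[valid]
    # Back-project pixels to camera coordinates: X = z * K^{-1} [u, v, 1]^T
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (us - cx) / fx * z
    y = (vs - cy) / fy * z
    means = np.stack([x, y, z], axis=-1)   # (N, 3) initial Gaussian centers
    scales = np.full(len(means), 0.1)      # placeholder isotropic scales
    return means, scales
```

Because the seeds lie on observed surfaces, far fewer primitives are needed than with random initialization, which is where the efficiency gain comes from.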
📝 Abstract
Monocular 3D Semantic Scene Completion (SSC) is a challenging yet promising task that aims to infer dense geometric and semantic descriptions of a scene from a single image. While recent object-centric paradigms significantly improve efficiency by leveraging flexible 3D Gaussian primitives, they still rely heavily on a large number of randomly initialized primitives, which inevitably leads to 1) inefficient primitive initialization and 2) outlier primitives that introduce erroneous artifacts. In this paper, we propose SplatSSC, a novel framework that resolves these limitations with a depth-guided initialization strategy and a principled Gaussian aggregator. Instead of random initialization, SplatSSC utilizes a dedicated depth branch composed of a Group-wise Multi-scale Fusion (GMF) module, which integrates multi-scale image and depth features to generate a sparse yet representative set of initial Gaussian primitives. To mitigate noise from outlier primitives, we develop the Decoupled Gaussian Aggregator (DGA), which enhances robustness by decomposing geometric and semantic predictions during the Gaussian-to-voxel splatting process. Complemented with a specialized Probability Scale Loss, our method achieves state-of-the-art performance on the Occ-ScanNet dataset, outperforming prior approaches by over 6.3% in IoU and 4.1% in mIoU, while reducing both latency and memory consumption by more than 9.3%. The code will be released upon acceptance.
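The decoupling idea in the DGA can be illustrated with a minimal Gaussian-to-voxel splatting sketch: occupancy is aggregated from opacity-weighted Gaussian densities, while semantics are aggregated separately with normalized weights, so one spurious high-opacity primitive cannot dominate the class prediction. All names, the isotropic-scale simplification, and the specific aggregation formulas below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def splat_gaussians_to_voxels(means, scales, opacities, sem_logits,
                              grid_shape, voxel_size=0.1):
    """Hypothetical sketch of decoupled Gaussian-to-voxel splatting.

    means:      (N, 3) Gaussian centers in world coordinates
    scales:     (N,)   isotropic std devs (anisotropic in the paper)
    opacities:  (N,)   per-Gaussian opacity in [0, 1]
    sem_logits: (N, C) per-Gaussian semantic logits
    """
    D, H, W = grid_shape
    # Voxel-center coordinates, flattened to (V, 3).
    zs, ys, xs = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                             indexing="ij")
    centers = (np.stack([xs, ys, zs], axis=-1).reshape(-1, 3) + 0.5) * voxel_size

    # Gaussian density of every primitive at every voxel center: (V, N).
    d2 = ((centers[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / (scales[None, :] ** 2))

    # Geometry branch: opacity-weighted densities composited into occupancy.
    occ = 1.0 - np.prod(1.0 - opacities[None, :] * w, axis=1)

    # Semantic branch: aggregated independently with normalized weights,
    # so a single high-opacity outlier cannot dominate the class prediction.
    w_norm = w / (w.sum(axis=1, keepdims=True) + 1e-8)
    sem = w_norm @ sem_logits  # (V, C)

    return occ.reshape(grid_shape), sem.reshape(D, H, W, -1)
```

In this toy form the geometry branch is driven by opacities and the semantic branch only by relative spatial weights; the paper's DGA realizes the same separation with learned components.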