🤖 AI Summary
3D Gaussian Splatting (3DGS) has limited geometric representational capacity due to its unnormalized mixture formulation and the restrictive assumption that every component is Gaussian.
Method: This work introduces the first mixture density model based on Student’s t-distribution for neural rendering—explicitly incorporating heavy-tailed distributions into scene reconstruction. We (i) employ a signed density field to jointly model forward splatting and backward “scooping” for geometry-appearance co-optimization; (ii) design an adaptive importance sampling strategy tailored to heavy-tailed densities to ensure training stability; and (iii) integrate differentiable rendering with gradient-driven component pruning.
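To make the signed heavy-tailed mixture idea concrete, here is a minimal 1D sketch (toy parameters chosen for illustration, not the paper's 3D formulation or its actual rendering pipeline): positive-weight Student's t components "splat" density in, negative-weight ones "scoop" it out, and small degrees of freedom give heavier tails than a Gaussian.

```python
import math

def student_t_pdf(x, mu=0.0, sigma=1.0, nu=3.0):
    """Student's t density with location mu, scale sigma, dof nu.
    Small nu gives heavy tails; as nu -> inf it approaches a Gaussian."""
    z = (x - mu) / sigma
    coef = math.gamma((nu + 1) / 2) / (
        math.sqrt(nu * math.pi) * math.gamma(nu / 2) * sigma
    )
    return coef * (1.0 + z * z / nu) ** (-(nu + 1) / 2)

def signed_mixture(x, components):
    """Signed mixture of (weight, mu, sigma, nu) tuples.
    Negative weights 'scoop' density out; the result is clamped at zero
    so the modeled density stays non-negative (a toy choice here)."""
    d = sum(w * student_t_pdf(x, mu, s, nu) for (w, mu, s, nu) in components)
    return max(d, 0.0)

# One splatting and one scooping component (hypothetical toy values).
comps = [(1.0, 0.0, 1.0, 2.0), (-0.3, 0.5, 0.3, 5.0)]
```

Note how a t-component with `nu=2` retains far more mass at `x = 5` than a unit Gaussian would, which is why heavy tails can cover large regions with fewer components.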
Results: The method achieves state-of-the-art quality across multiple benchmarks and, at comparable reconstruction quality, reduces the number of mixture components by up to 82%, substantially improving parameter efficiency.
📝 Abstract
Recently, 3D Gaussian Splatting (3DGS) provided a new framework for novel view synthesis and sparked a new wave of research in neural rendering and related applications. As 3DGS becomes a foundational component of many models, any improvement to 3DGS itself can bring huge benefits. To this end, we aim to improve the fundamental paradigm and formulation of 3DGS. We argue that, as an unnormalized mixture model, its components need be neither Gaussian nor rendered purely by splatting. We subsequently propose a new mixture model consisting of flexible Student's t distributions, with both positive (splatting) and negative (scooping) densities. We name our model Student Splatting and Scooping, or SSS. While providing better expressivity, SSS also poses new challenges in learning. Therefore, we also propose a new principled sampling approach for optimization. Through exhaustive evaluation and comparison across multiple datasets, settings, and metrics, we demonstrate that SSS outperforms existing methods in quality and parameter efficiency, e.g., achieving matching or better quality with similar numbers of components, and obtaining comparable results while reducing the component count by as much as 82%.