🤖 AI Summary
To address overfitting in 3D Gaussian Splatting (3DGS) under sparse-view settings, caused by uniform per-Gaussian rendering weights, this paper proposes an uncertainty-guided adaptive Gaussian weighting framework. The method introduces: (1) a learnable uncertainty field that dynamically modulates per-Gaussian rendering weights; (2) an uncertainty-driven differentiable opacity optimization mechanism; and (3) a soft, differentiable dropout regularization that maps uncertainty to continuous drop probabilities to improve generalization. Integrated seamlessly into the standard 3DGS pipeline, the approach requires no additional supervision. Extensive experiments on widely adopted benchmarks, including MipNeRF 360, demonstrate state-of-the-art performance: the method achieves a 3.27% PSNR improvement over DropGaussian while delivering higher-quality reconstruction with fewer Gaussians.
📝 Abstract
3D Gaussian Splatting (3DGS) has become a competitive approach for novel view synthesis (NVS) due to its rendering efficiency through 3D Gaussian projection and blending. However, most 3DGS methods treat Gaussians as equally weighted for rendering, making them prone to overfitting, particularly in sparse-view scenarios. To address this, we investigate how adaptive weighting of Gaussians, characterised by proposed learned uncertainties, affects rendering quality. This learned uncertainty serves two key purposes: first, it guides the differentiable update of Gaussian opacity while preserving the integrity of the 3DGS pipeline; second, it undergoes soft differentiable dropout regularisation, which transforms the original uncertainty into continuous drop probabilities that govern the final Gaussian projection and blending for rendering. Extensive experiments on widely adopted datasets demonstrate that our method outperforms rivals in sparse-view 3D synthesis, achieving higher-quality reconstruction with fewer Gaussians on most datasets compared to existing sparse-view approaches; e.g., compared to DropGaussian, our method achieves a 3.27% PSNR improvement on the MipNeRF 360 dataset.
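The soft differentiable dropout described in the abstract can be sketched in a few lines: a per-Gaussian uncertainty is mapped to a continuous drop probability, which then down-weights the Gaussian's opacity smoothly rather than discarding it. This is a minimal illustration under assumptions, not the paper's implementation; the function name `soft_dropout_opacity`, the sigmoid mapping, and the temperature `tau` are hypothetical choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_dropout_opacity(alpha, u, tau=1.0):
    """Modulate per-Gaussian opacity by a continuous drop probability.

    alpha: base opacities in (0, 1), shape (N,)
    u:     learnable per-Gaussian uncertainty logits, shape (N,)
    tau:   temperature controlling the sharpness of the mapping
           (hypothetical parameter, not from the paper)
    """
    p_drop = sigmoid(u / tau)      # uncertainty -> continuous drop probability
    return alpha * (1.0 - p_drop)  # soft, differentiable down-weighting

# Toy usage: confident Gaussians keep opacity, uncertain ones are attenuated.
alpha = np.array([0.9, 0.9, 0.9])
u = np.array([-4.0, 0.0, 4.0])     # low logit = confident, high = uncertain
eff = soft_dropout_opacity(alpha, u)
```

Because the mapping is smooth, gradients flow through `p_drop` into the uncertainty logits, which is what makes the regularisation trainable end to end, unlike hard Bernoulli dropout.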