🤖 AI Summary
To address overfitting and degraded novel-view rendering quality in 3D Gaussian Splatting (3DGS) under sparse-view settings (e.g., three views), this paper proposes a structural regularization method that requires no external priors. The core innovation is the first-ever Gaussian-level dropout mechanism: during training, a subset of Gaussian ellipsoids is randomly dropped, coupled with visibility-weighted optimization and gradient redistribution in the rendering pipeline to enhance the representational robustness and geometric consistency of the remaining Gaussians. The method is lightweight and computationally efficient—introducing zero additional inference or training overhead. Evaluated on standard sparse-view benchmarks, it achieves rendering quality competitive with strong prior-based approaches while significantly improving generalization. Source code and pre-trained models are publicly released.
📝 Abstract
Recently, 3D Gaussian splatting (3DGS) has gained considerable attentions in the field of novel view synthesis due to its fast performance while yielding the excellent image quality. However, 3DGS in sparse-view settings (e.g., three-view inputs) often faces with the problem of overfitting to training views, which significantly drops the visual quality of novel view images. Many existing approaches have tackled this issue by using strong priors, such as 2D generative contextual information and external depth signals. In contrast, this paper introduces a prior-free method, so-called DropGaussian, with simple changes in 3D Gaussian splatting. Specifically, we randomly remove Gaussians during the training process in a similar way of dropout, which allows non-excluded Gaussians to have larger gradients while improving their visibility. This makes the remaining Gaussians to contribute more to the optimization process for rendering with sparse input views. Such simple operation effectively alleviates the overfitting problem and enhances the quality of novel view synthesis. By simply applying DropGaussian to the original 3DGS framework, we can achieve the competitive performance with existing prior-based 3DGS methods in sparse-view settings of benchmark datasets without any additional complexity. The code and model are publicly available at: https://github.com/DCVL-3D/DropGaussian release.