FastGS: Training 3D Gaussian Splatting in 100 Seconds

📅 2025-11-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the inefficiency and redundant computation in 3D Gaussian Splatting (3DGS) training caused by static, pre-defined Gaussian budgets, this paper proposes FastGS, a fast training framework guided by multi-view consistency. Instead of relying on fixed budget constraints, FastGS employs multi-view consistency as a unified, unsupervised signal to dynamically drive adaptive densification and pruning of Gaussians during training, enabling geometry-aware optimization of the point cloud that preserves rendering fidelity. This approach significantly improves training throughput without compromising visual quality. On the Mip-NeRF 360 and Deep Blending benchmarks, FastGS achieves speedups of 3.32× and 15.45×, respectively; in cross-dataset generalization tests, it consistently delivers 2–7× acceleration while maintaining rendering quality comparable to the original 3DGS method.

πŸ“ Abstract
The dominant 3D Gaussian splatting (3DGS) acceleration methods fail to properly regulate the number of Gaussians during training, causing redundant computational time overhead. In this paper, we propose FastGS, a novel, simple, and general acceleration framework that fully considers the importance of each Gaussian based on multi-view consistency, efficiently solving the trade-off between training time and rendering quality. We innovatively design a densification and pruning strategy based on multi-view consistency, dispensing with the budgeting mechanism. Extensive experiments on Mip-NeRF 360, Tanks&Temples, and Deep Blending datasets demonstrate that our method significantly outperforms the state-of-the-art methods in training speed, achieving a 3.32$\times$ training acceleration and comparable rendering quality compared with DashGaussian on the Mip-NeRF 360 dataset and a 15.45$\times$ acceleration compared with vanilla 3DGS on the Deep Blending dataset. We demonstrate that FastGS exhibits strong generality, delivering 2-7$\times$ training acceleration across various tasks, including dynamic scene reconstruction, surface reconstruction, sparse-view reconstruction, large-scale reconstruction, and simultaneous localization and mapping. The project page is available at https://fastgs.github.io/
Problem

Research questions and friction points this paper is trying to address.

Regulating the number of Gaussians during 3DGS training to reduce redundant computational overhead
Resolving the trade-off between training time and rendering quality in 3D reconstruction
Building an acceleration framework that generalizes across diverse 3D scene reconstruction tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multi-view consistency to evaluate the importance of each Gaussian
Drives densification and pruning without a fixed Gaussian budget
Achieves 2–7× training acceleration across diverse tasks, up to 15.45× on standard benchmarks
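The budget-free pruning idea above can be sketched as follows. This is a hypothetical simplification, not the paper's actual algorithm: the per-view score (here a stand-in such as a Gaussian's blending weight or error contribution when rendering each view) and the aggregation rule are assumptions for illustration. The key property is that the surviving Gaussian count adapts to the scene rather than being capped by a preset budget.

```python
import numpy as np

def multiview_importance(per_view_scores: np.ndarray) -> np.ndarray:
    """Aggregate per-view scores into one importance value per Gaussian.

    per_view_scores: (num_views, num_gaussians) array; each entry is a
    hypothetical measure of how much a Gaussian contributes to rendering
    that view. A Gaussian is 'multi-view consistent' if it contributes
    meaningfully in many views, not just one.
    """
    seen = per_view_scores > 0.0                # views where the Gaussian contributes
    coverage = seen.mean(axis=0)                # fraction of views it appears in
    mean_score = per_view_scores.mean(axis=0)   # average contribution across views
    return coverage * mean_score                # consistent AND important

def prune_mask(importance: np.ndarray, threshold: float) -> np.ndarray:
    """Keep Gaussians whose multi-view importance exceeds the threshold.

    No fixed budget: the number of survivors follows from the scene,
    which is the contrast with budget-capped 3DGS acceleration methods.
    """
    return importance > threshold
```

For example, a Gaussian that scores highly in a single view but is invisible elsewhere receives a low combined importance and is pruned, while one with moderate but consistent contributions across views survives.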