🤖 AI Summary
To address the slow rendering speed and heavy model redundancy of 3D Gaussian Splatting (3D-GS) in real-time novel-view synthesis, this paper proposes Speedy-Splat, an end-to-end acceleration framework. The method optimizes the rendering pipeline to precisely localize Gaussians in the scene and introduces a novel pruning technique integrated directly into training, simultaneously achieving model compression, rendering acceleration, and faster training without compromising visual fidelity. Evaluated on the Mip-NeRF 360, Tanks & Temples, and Deep Blending benchmarks, the approach maintains state-of-the-art (SOTA) rendering quality while delivering an average 6.71× rendering speedup and using 10.6× fewer Gaussian primitives than 3D-GS.
📝 Abstract
3D Gaussian Splatting (3D-GS) is a recent 3D scene reconstruction technique that enables real-time rendering of novel views by modeling scenes as parametric point clouds of differentiable 3D Gaussians. However, its rendering speed and model size still present bottlenecks, especially in resource-constrained settings. In this paper, we identify and address two key inefficiencies in 3D-GS, achieving substantial improvements in rendering speed, model size, and training time. First, we optimize the rendering pipeline to precisely localize Gaussians in the scene, boosting rendering speed without altering visual fidelity. Second, we introduce a novel pruning technique and integrate it into the training pipeline, significantly reducing model size and training time while further raising rendering speed. Our Speedy-Splat approach combines these techniques to accelerate average rendering speed by a drastic $6.71\times$ across scenes from the Mip-NeRF 360, Tanks & Temples, and Deep Blending datasets with $10.6\times$ fewer primitives than 3D-GS.