SkipGS: Post-Densification Backward Skipping for Efficient 3DGS Training

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of 3D Gaussian Splatting (3DGS) during its post-densification phase, where training is time-consuming and backpropagation exhibits high redundancy. The authors propose SkipGS, the first method to identify and exploit gradient update redundancy in this stage. SkipGS introduces a plug-and-play, view-adaptive backpropagation gating mechanism that requires no modifications to the renderer or scene representation. By leveraging per-view loss statistics, it selectively skips backpropagation steps yielding minimal improvement, while enforcing a minimum backpropagation budget to ensure optimization stability. Evaluated on the Mip-NeRF 360 dataset, SkipGS reduces end-to-end training time by 23.1% and accelerates the post-densification phase by 42.0%, all while maintaining reconstruction quality comparable to the original 3DGS.

📝 Abstract
3D Gaussian Splatting (3DGS) achieves real-time novel-view synthesis by optimizing millions of anisotropic Gaussians, yet its training remains expensive, with the backward pass dominating runtime in the post-densification refinement phase. We observe substantial update redundancy in this phase: many sampled views have near-plateaued losses and provide diminishing gradient benefits, but standard training still runs full backpropagation. We propose SkipGS with a novel view-adaptive backward gating mechanism for efficient post-densification training. SkipGS always performs the forward pass to update per-view loss statistics, and selectively skips backward passes when the sampled view's loss is consistent with its recent per-view baseline, while enforcing a minimum backward budget for stable optimization. On Mip-NeRF 360, compared to 3DGS, SkipGS reduces end-to-end training time by 23.1%, driven by a 42.0% reduction in post-densification time, with comparable reconstruction quality. Because it only changes when to backpropagate -- without modifying the renderer, representation, or loss -- SkipGS is plug-and-play and compatible with other complementary efficiency strategies for additive speedups.
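The gating rule described in the abstract (always forward, skip backward when a view's loss is consistent with its recent per-view baseline, subject to a minimum backward budget) can be sketched as follows. The paper does not specify the baseline estimator or thresholds, so the EMA baseline and the `tol` and `min_backward_frac` parameters below are illustrative assumptions, not the authors' implementation:

```python
class BackwardGate:
    """Hypothetical sketch of a SkipGS-style view-adaptive backward gate.

    Per-view loss statistics are updated on every forward pass; the
    backward pass is skipped when the loss is consistent with that view's
    recent baseline, while a minimum backward budget is enforced.
    """

    def __init__(self, num_views, ema_decay=0.9, tol=0.02, min_backward_frac=0.3):
        self.baseline = [None] * num_views  # per-view EMA of the loss (assumed estimator)
        self.ema_decay = ema_decay
        self.tol = tol                      # relative tolerance around the baseline (assumed)
        self.min_backward_frac = min_backward_frac  # minimum backward budget (assumed)
        self.steps = 0
        self.backwards = 0

    def should_backward(self, view_id, loss):
        """Called after every forward pass with the sampled view's loss."""
        self.steps += 1
        base = self.baseline[view_id]
        if base is None:
            # No statistics for this view yet: record the loss and backpropagate.
            self.baseline[view_id] = loss
            decide = True
        else:
            # Always update per-view statistics, even on skipped steps.
            self.baseline[view_id] = self.ema_decay * base + (1 - self.ema_decay) * loss
            # Skip only when the loss is consistent with its recent baseline.
            decide = abs(loss - base) > self.tol * base
        # Enforce a minimum fraction of backward passes for stable optimization.
        if not decide and self.backwards < self.min_backward_frac * self.steps:
            decide = True
        if decide:
            self.backwards += 1
        return decide
```

In a training loop this would wrap only the call to `loss.backward()`; the renderer, scene representation, and loss are untouched, which is what makes the mechanism plug-and-play.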
Problem

Research questions and friction points this paper is trying to address.

3D Gaussian Splatting
training efficiency
backward pass
post-densification
gradient redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

backward skipping
view-adaptive gating
3D Gaussian Splatting
training efficiency
post-densification