🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS) methods struggle to achieve continuous levels of detail (LoD) and flexible fidelity control while preserving full-capacity rendering quality. This work proposes a novel training framework that learns an ordered set of Gaussians, enabling any prefix subset to produce coherent reconstructions with smoothly increasing quality as the computational budget grows. The approach employs a stochastic budget training strategy, randomly sampling the number of Gaussians per iteration and jointly optimizing both prefix subsets and the full set with only two forward passes, without altering the original architecture. Experiments demonstrate that the method outperforms six baselines across four benchmarks, achieving state-of-the-art continuous speed–quality trade-offs within a single model while maintaining the backbone’s full-capacity performance.
📝 Abstract
The ability to render scenes at adjustable fidelity from a single model, known as level of detail (LoD), is crucial for practical deployment of 3D Gaussian Splatting (3DGS). Existing discrete LoD methods expose only a limited set of operating points, while concurrent continuous LoD approaches enable smoother scaling but often suffer noticeable quality degradation at full capacity, making LoD a costly design decision. We introduce Matryoshka Gaussian Splatting (MGS), a training framework that enables continuous LoD for standard 3DGS pipelines without sacrificing full-capacity rendering quality. MGS learns a single ordered set of Gaussians such that rendering any prefix (the first k splats) produces a coherent reconstruction whose fidelity improves smoothly with increasing budget. Our key idea is stochastic budget training: each iteration samples a random splat budget and optimizes both the corresponding prefix and the full set. This strategy requires only two forward passes and introduces no architectural modifications. Experiments across four benchmarks and six baselines show that MGS matches the full-capacity performance of its backbone while enabling a continuous speed–quality trade-off from a single model. Extensive ablations on ordering strategies, training objectives, and model capacity further validate these design choices.
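The stochastic budget training idea, sample a random prefix budget each step and optimize both the prefix loss and the full-set loss, can be sketched on a toy 1D fitting problem. This is an illustrative reduction under our own assumptions, not the paper's implementation: the Gaussian-basis "renderer" and names like `render_prefix` are hypothetical stand-ins for a real differentiable 3DGS rasterizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "scene": a target signal approximated by N ordered Gaussian bumps.
M, N = 128, 32
x = np.linspace(0.0, 1.0, M)
target = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)

centers = np.linspace(0.0, 1.0, N)
basis = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * 0.03**2))  # (M, N)
amps = np.zeros(N)  # learnable amplitudes of the ordered "splats"

def render_prefix(a, k):
    """Render using only the first k splats (the prefix subset)."""
    return basis[:, :k] @ a[:k]

def mse(pred):
    return float(np.mean((pred - target) ** 2))

lr = 0.05
for step in range(400):
    k = int(rng.integers(1, N + 1))  # sample a random splat budget
    # Forward pass 1: prefix loss; its gradient only touches the first k amps.
    resid_k = render_prefix(amps, k) - target
    grad = np.zeros(N)
    grad[:k] = (2.0 / M) * basis[:, :k].T @ resid_k
    # Forward pass 2: full-capacity loss, so quality at the full budget is kept.
    resid_full = render_prefix(amps, N) - target
    grad += (2.0 / M) * basis.T @ resid_full
    amps -= lr * grad  # joint update from both objectives
```

After training, any prefix budget yields a coherent (if coarser) reconstruction, and the full set retains its quality, mirroring the trade-off the paper describes, here with analytic MSE gradients standing in for backprop through a rasterizer.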