Matryoshka Gaussian Splatting

📅 2026-03-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS) methods struggle to achieve continuous levels of detail (LoD) and flexible fidelity control while preserving full-capacity rendering quality. This work proposes a novel training framework that learns an ordered set of Gaussians, enabling any prefix subset to produce a coherent reconstruction whose quality improves smoothly as the computational budget grows. The approach employs a stochastic budget training strategy: each iteration randomly samples a splat budget and jointly optimizes the corresponding prefix and the full set with only two forward passes, without altering the original architecture. Experiments demonstrate that the method outperforms six baselines across four benchmarks, achieving state-of-the-art continuous speed–quality trade-offs within a single model while maintaining the backbone’s full-capacity performance.

📝 Abstract
The ability to render scenes at adjustable fidelity from a single model, known as level of detail (LoD), is crucial for practical deployment of 3D Gaussian Splatting (3DGS). Existing discrete LoD methods expose only a limited set of operating points, while concurrent continuous LoD approaches enable smoother scaling but often suffer noticeable quality degradation at full capacity, making LoD a costly design decision. We introduce Matryoshka Gaussian Splatting (MGS), a training framework that enables continuous LoD for standard 3DGS pipelines without sacrificing full-capacity rendering quality. MGS learns a single ordered set of Gaussians such that rendering any prefix, the first k splats, produces a coherent reconstruction whose fidelity improves smoothly with increasing budget. Our key idea is stochastic budget training: each iteration samples a random splat budget and optimises both the corresponding prefix and the full set. This strategy requires only two forward passes and introduces no architectural modifications. Experiments across four benchmarks and six baselines show that MGS matches the full-capacity performance of its backbone while enabling a continuous speed–quality trade-off from a single model. Extensive ablations on ordering strategies, training objectives, and model capacity further validate the designs.
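The stochastic budget training step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `render` and `loss_fn` callables below are toy placeholders standing in for the real differentiable rasteriser and photometric loss, and the ordered array of scalars stands in for the ordered set of Gaussian splats.

```python
import numpy as np

def stochastic_budget_step(params, render, loss_fn, target, rng, k_min=1):
    # One iteration of the stochastic budget strategy (sketch):
    # sample a random splat budget k, then combine the loss of the
    # first-k prefix with the loss of the full set, so only two
    # forward passes are needed per step.
    n = len(params)
    k = int(rng.integers(k_min, n + 1))              # random budget k in [k_min, n]
    loss_prefix = loss_fn(render(params[:k]), target)  # pass 1: prefix render
    loss_full = loss_fn(render(params), target)        # pass 2: full-set render
    return k, loss_prefix + loss_full                  # joint objective

# Toy usage: "rendering" sums splat contributions; loss is squared error.
rng = np.random.default_rng(0)
gaussians = np.linspace(1.0, 0.1, 10)   # ordered set, decreasing importance
render = lambda g: g.sum()
loss_fn = lambda pred, tgt: (pred - tgt) ** 2
k, loss = stochastic_budget_step(gaussians, render, loss_fn, target=5.5, rng=rng)
```

Because every sampled budget optimises a prefix of the same ordered set, the learned splats end up nested like matryoshka dolls: truncating at any k yields a usable reconstruction.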
Problem

Research questions and friction points this paper is trying to address.

Level of Detail
3D Gaussian Splatting
continuous LoD
rendering fidelity
adaptive rendering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Matryoshka Gaussian Splatting
continuous Level of Detail
stochastic budget training
3D Gaussian Splatting
adaptive rendering
Zhilin Guo
University of Cambridge
Boqiao Zhang
University of Cambridge
Hakan Aktas
University of Cambridge
Robotics · Deep Learning · AI
Kyle Fogarty
University of Cambridge
Geometry Processing · Geometric Deep Learning · Applied Mathematics
Jeffrey Hu
University of Cambridge
Nursena Koprucu Aslan
University of Cambridge
Wenzhao Li
University of Cambridge
Canberk Baykal
University of Cambridge
Albert Miao
University of Cambridge
Josef Bengtson
PhD student, Chalmers University of Technology
Computer Vision · Machine Learning
Chenliang Zhou
University of Cambridge
machine learning · generative artificial intelligence · computer vision · computer graphics
Weihao Xia
University of Cambridge
Cristina Nader Vasconcelos
Google
Cengiz Oztireli
University of Cambridge