AI Summary
Existing radiance field methods achieve high rendering quality but suffer from substantial computational overhead, whereas Gaussian splatting enables real-time rendering yet lacks optimization robustness in complex scenes. This paper proposes a novel radiance-field-guided Gaussian splatting optimization paradigm: leveraging a pre-trained radiance field as a structural prior to supervise adaptive Gaussian point cloud optimization; incorporating dynamic point cloud pruning and test-time spatial filtering to achieve compact representation and scalable rendering; and integrating rasterization acceleration for enhanced inference efficiency. Evaluated on challenging outdoor and room-scale scenes, our method achieves over 900 FPS real-time rendering, surpassing state-of-the-art methods in PSNR and SSIM, while significantly reducing GPU memory consumption and FLOPs, thereby unifying high fidelity, robust optimization, and computational efficiency.
Abstract
Recent advances in view synthesis and real-time rendering have achieved photorealistic quality at impressive rendering speeds. While Radiance Field-based methods achieve state-of-the-art quality in challenging scenarios such as in-the-wild captures and large-scale scenes, they often suffer from excessively high compute requirements linked to volumetric rendering. Gaussian Splatting-based methods, on the other hand, rely on rasterization and naturally achieve real-time rendering, but suffer from brittle optimization heuristics that underperform on more challenging scenes. In this work, we present RadSplat, a lightweight method for robust real-time rendering of complex scenes. Our main contributions are threefold. First, we use radiance fields as a prior and supervision signal for optimizing point-based scene representations, leading to improved quality and more robust optimization. Next, we develop a novel pruning technique that reduces the overall point count while maintaining high quality, yielding smaller, more compact scene representations with faster inference speeds. Finally, we propose a novel test-time filtering approach that further accelerates rendering and allows scaling to larger, house-sized scenes. We find that our method enables state-of-the-art synthesis of complex captures at 900+ FPS.
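To make the pruning idea above concrete, here is a minimal, illustrative sketch. It assumes (this is our assumption, not the paper's stated criterion) that each Gaussian carries an importance score, such as its maximum blending contribution to any training ray, and that points whose score falls below a threshold are dropped. The function name `prune_gaussians`, the score definition, and the threshold value are all hypothetical.

```python
def prune_gaussians(max_contributions, tau=0.01):
    """Illustrative contribution-based pruning (hypothetical criterion):
    keep only the indices of Gaussians whose maximum per-ray blending
    contribution across training views exceeds the threshold tau."""
    return [i for i, c in enumerate(max_contributions) if c > tau]

# Toy example: 5 Gaussians, two of which are nearly invisible in all views.
scores = [0.5, 0.3, 0.005, 0.2, 0.001]
kept = prune_gaussians(scores)
print(kept)  # [0, 1, 3] -- the two near-invisible points are pruned
```

In practice such a score would be accumulated during rendering of the training views; the sketch only shows the thresholding step that shrinks the point set and thus speeds up rasterization.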