NVGS: Neural Visibility for Occlusion Culling in 3D Gaussian Splatting

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In 3D Gaussian splatting, occlusion culling remains challenging because Gaussians are inherently semi-transparent, which severely limits rendering efficiency for complex scenes. To address this, we propose a neural visibility model: a lightweight MLP, shared across all Gaussians of an asset, that learns a view-dependent visibility function per primitive, enabling dynamic occlusion culling of semi-transparent Gaussians prior to rasterization. Integrated with a custom instanced software rasterizer, frustum culling, and Tensor Core acceleration, the pipeline is optimized end to end. This is the first work to incorporate neural networks into Gaussian-splatting occlusion culling, and the first to support view-dependent visibility prediction for semi-transparent primitives, complementary to existing LoD strategies. Experiments demonstrate a 27% reduction in VRAM usage and a 2.1× rendering speedup while maintaining state-of-the-art image quality, advancing real-time, high-performance Gaussian splatting rendering.

📝 Abstract
3D Gaussian Splatting can exploit frustum culling and level-of-detail strategies to accelerate rendering of scenes containing a large number of primitives. However, the semi-transparent nature of Gaussians prevents the application of another highly effective technique: occlusion culling. We address this limitation by proposing a novel method to learn the viewpoint-dependent visibility function of all Gaussians in a trained model using a small, shared MLP across instances of an asset in a scene. By querying it for Gaussians within the viewing frustum prior to rasterization, our method can discard occluded primitives during rendering. Leveraging Tensor Cores for efficient computation, we integrate these neural queries directly into a novel instanced software rasterizer. Our approach outperforms the current state of the art for composed scenes in terms of VRAM usage and image quality, utilizing a combination of our instanced rasterizer and occlusion culling MLP, and exhibits complementary properties to existing LoD techniques.
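The core mechanism in the abstract, a small shared MLP queried per Gaussian before rasterization, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimension, hidden width, and 0.5 culling threshold are assumptions, and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper does not specify these dimensions.
N_GAUSSIANS = 1000
FEAT_DIM = 8   # learned per-Gaussian embedding fed to the shared MLP
HIDDEN = 16

# One shared network for all Gaussians of an asset (weights untrained here).
W1 = rng.normal(size=(FEAT_DIM + 3, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 1)) * 0.1
b2 = np.zeros(1)

def visibility(features, view_dir):
    """Predict per-Gaussian visibility in [0, 1] for one view direction."""
    d = np.broadcast_to(view_dir, (features.shape[0], 3))
    x = np.concatenate([features, d], axis=1)
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2))).ravel()

features = rng.normal(size=(N_GAUSSIANS, FEAT_DIM))
view_dir = np.array([0.0, 0.0, 1.0])

vis = visibility(features, view_dir)
keep = vis > 0.5  # Gaussians predicted occluded are culled before rasterization
culled_count = N_GAUSSIANS - keep.sum()
```

Because the network is shared across all primitives, the query is a single batched matrix multiply per view, which is what makes the Tensor Core acceleration mentioned in the abstract applicable.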
Problem

Research questions and friction points this paper is trying to address.

Enables occlusion culling for semi-transparent 3D Gaussians
Learns viewpoint-dependent visibility using shared MLP
Discards occluded primitives before rasterization to optimize rendering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural visibility function learned via shared MLP
Instanced software rasterizer leveraging Tensor Cores
Occlusion culling for 3D Gaussian Splatting primitives
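The bullets above describe a two-stage pipeline: frustum culling first, then a neural visibility query on the survivors. A rough sketch of that ordering, with the learned MLP replaced by a random stand-in and all names, sizes, and thresholds assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Place 500 illustrative Gaussian centers in front of a camera at the origin.
positions = rng.uniform(-1.0, 1.0, size=(500, 3))
positions[:, 2] += 2.0

def frustum_mask(pts, near=0.1, far=10.0, half_fov=np.pi / 4):
    """Keep points inside a symmetric viewing cone along +z."""
    z = pts[:, 2]
    in_depth = (z > near) & (z < far)
    radial = np.linalg.norm(pts[:, :2], axis=1)
    return in_depth & (radial <= z * np.tan(half_fov))

def neural_visibility_stub(pts):
    """Stand-in for the shared visibility MLP described in the paper."""
    return rng.uniform(0.0, 1.0, size=pts.shape[0])

# Stage 1: frustum culling narrows the candidate set.
m_frustum = frustum_mask(positions)
candidates = np.where(m_frustum)[0]

# Stage 2: query visibility only for in-frustum Gaussians, as in the abstract.
vis = neural_visibility_stub(positions[candidates])
survivors = candidates[vis > 0.5]
# Only `survivors` would be handed to the instanced software rasterizer.
```

Querying visibility only for in-frustum Gaussians keeps the neural stage proportional to the visible working set rather than the full scene.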
Brent Zoomers
Digital Future Lab, Hasselt University, Belgium
Florian Hahlbohm
TU Braunschweig
View Synthesis · Image-Based Rendering · Neural Rendering · Real-Time Rendering · Gaussian Splatting
Joni Vanherck
Digital Future Lab, Hasselt University, Belgium
Lode Jorissen
Digital Future Lab, Hasselt University, Belgium
Marcus Magnor
Professor of Computer Science, TU Braunschweig, L3S Research Center
graphics · vision · visual computing · 3D video · psychophysics
Nick Michiels
Digital Future Lab, Hasselt University, Belgium