🤖 AI Summary
To address the challenge of real-time rendering of 3D Gaussian Splatting (3DGS) for large-scale scenes on memory-constrained devices, this paper proposes a hierarchical level-of-detail (LOD) representation framework. Methodologically, it introduces the first joint paradigm for LOD construction, combining a depth-aware 3D smoothing filter with importance-driven pruning; designs an opacity-blending boundary-fusion mechanism to eliminate visual artifacts from chunked loading; and integrates spatial chunk indexing with dynamic GPU memory loading for efficient rendering. Evaluated on the Hierarchical 3DGS and Zip-NeRF datasets, the method achieves state-of-the-art performance: reducing rendering latency by 37% and GPU memory consumption by 42% while maintaining high visual fidelity.
📝 Abstract
In this work, we present a novel level-of-detail (LOD) method for 3D Gaussian Splatting that enables real-time rendering of large-scale scenes on memory-constrained devices. Our approach introduces a hierarchical LOD representation that iteratively selects optimal subsets of Gaussians based on camera distance, substantially reducing both rendering time and GPU memory usage. We construct each LOD level by applying a depth-aware 3D smoothing filter, followed by importance-based pruning and fine-tuning to maintain visual fidelity. To further reduce memory overhead, we partition the scene into spatial chunks and dynamically load only the relevant Gaussians during rendering, employing an opacity-blending mechanism to avoid visual artifacts at chunk boundaries. Our method achieves state-of-the-art performance on both outdoor (Hierarchical 3DGS) and indoor (Zip-NeRF) datasets, delivering high-quality renderings with reduced latency and memory requirements.
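The two mechanisms the abstract describes, distance-based LOD selection per chunk and opacity blending near chunk boundaries, can be sketched as follows. This is a minimal illustrative sketch only: the function names, distance thresholds, axis-aligned chunk layout, and linear fade are our assumptions, not the paper's actual implementation.

```python
import numpy as np

def select_lod_level(camera_pos, chunk_center, thresholds=(10.0, 30.0, 80.0)):
    """Pick an LOD level (0 = finest) from the camera-to-chunk distance.
    The thresholds here are hypothetical; the paper selects Gaussian
    subsets per level rather than using fixed cutoffs."""
    d = np.linalg.norm(np.asarray(camera_pos, float) - np.asarray(chunk_center, float))
    for level, t in enumerate(thresholds):
        if d < t:
            return level
    return len(thresholds)  # coarsest level beyond the last threshold

def boundary_opacity_weight(point, chunk_min, chunk_max, margin=1.0):
    """Linearly fade a Gaussian's opacity to zero within `margin` of the
    chunk's AABB boundary, so overlapping chunks cross-fade instead of
    producing visible seams."""
    p = np.asarray(point, float)
    lo = np.asarray(chunk_min, float)
    hi = np.asarray(chunk_max, float)
    dist = np.minimum(p - lo, hi - p)      # distance to nearest face, per axis
    w = np.clip(dist / margin, 0.0, 1.0)   # 0 at the boundary, 1 inside
    return float(w.min())
```

At render time, one would multiply each Gaussian's learned opacity by `boundary_opacity_weight` for its chunk, so a Gaussian duplicated in two overlapping chunks contributes a smoothly blended total rather than a hard cut.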