🤖 AI Summary
Hash grids used in Gaussian splatting-based implicit neural fields suffer from substantial storage and transmission redundancy: because 3D Gaussians are sparsely and non-uniformly distributed, many hash grid feature entries are never queried and are therefore invalid. To address this, the authors propose a coordinate-driven structured pruning method for the hash grid: using the Gaussian center coordinates, a coordinate-aware validity test identifies and discards unused hash entries, and the scheme integrates seamlessly with the encoding pipeline defined by the Common Test Conditions (CTC). The method is presented as the first lossless hash grid pruning scheme tailored to Gaussian splatting-based implicit representations: it preserves PSNR and SSIM exactly while reducing the average bitrate by 8% and substantially shrinking the hash grid's memory footprint, improving rate-distortion performance at no extra computational cost.
📝 Abstract
Hash grids are widely used to learn an implicit neural field for Gaussian splatting, serving either as part of the entropy model or for inter-frame prediction. However, due to the irregular and non-uniform distribution of Gaussian splats in 3D space, numerous sparse regions exist, rendering many features in the hash grid invalid. This leads to redundant storage and transmission overhead. In this work, we propose a hash grid feature pruning method that identifies and prunes invalid features based on the coordinates of the input Gaussian splats, so that only the valid features are encoded. This approach reduces the storage size of the hash grid without compromising model performance, leading to improved rate-distortion performance. Following the Common Test Conditions (CTC) defined by the standardization committee, our method achieves an average bitrate reduction of 8% compared to the baseline approach.
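The core idea above, pruning hash grid entries that no Gaussian splat ever queries, can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes an Instant-NGP-style spatial hash and hypothetical helpers `spatial_hash` and `valid_entry_mask`. Since the decoder also has access to the Gaussian center coordinates, it can rebuild the same validity mask, so only the compacted feature table needs to be transmitted:

```python
import numpy as np

def spatial_hash(coords, table_size):
    """Instant-NGP-style spatial hash: XOR of per-axis coords times large primes."""
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    h = np.zeros(len(coords), dtype=np.uint64)
    for d in range(3):
        h ^= coords[:, d].astype(np.uint64) * primes[d]  # uint64 wraps on overflow
    return h % np.uint64(table_size)

def valid_entry_mask(centers, resolution, table_size):
    """Mark hash table entries actually touched by the Gaussian centers.

    centers: (N, 3) Gaussian center coordinates normalized to [0, 1)^3.
    Each center trilinearly interpolates the features at the 8 corner
    vertices of its grid cell, so those 8 hashed entries are 'valid'.
    """
    idx = np.floor(centers * resolution).astype(np.int64)
    mask = np.zeros(table_size, dtype=bool)
    for offset in np.ndindex(2, 2, 2):  # the 8 cell corners
        corner = np.clip(idx + np.array(offset), 0, resolution)
        mask[spatial_hash(corner, table_size)] = True
    return mask

# Prune: keep only valid rows of the learned feature table.
rng = np.random.default_rng(0)
centers = rng.random((1000, 3))          # stand-in Gaussian centers
features = rng.standard_normal((2**16, 2)).astype(np.float32)
mask = valid_entry_mask(centers, resolution=128, table_size=2**16)
pruned = features[mask]                  # only these rows are encoded
print(f"kept {mask.sum()} / {len(features)} entries")
```

Because unmarked entries are never read during rendering, dropping them changes no output, which is why the pruning is lossless; the bitrate saving comes entirely from not encoding the unused rows.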