NeuralGS: Bridging Neural Fields and 3D Gaussian Splatting for Compact 3D Representations

📅 2025-03-29
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the large model size and high storage/transfer overhead of 3D Gaussian Splatting (3DGS), this paper proposes a lightweight compression method that avoids voxel-based scaffolds and complex quantization strategies. The core innovation is a direct integration of neural field principles into native 3DGS compression: importance-weighted clustering groups the Gaussians, and a compact MLP per cluster implicitly encodes their positions, covariances, and opacities, enabling end-to-end differentiable optimization. By eliminating scaffold dependencies and redundant encoding, the method significantly improves compression efficiency. Evaluated on multiple standard benchmarks, it achieves an average 45× model compression ratio without degrading rendering quality, and its performance is comparable to dedicated Scaffold-GS-based compression methods.

๐Ÿ“ Abstract
3D Gaussian Splatting (3DGS) demonstrates superior quality and rendering speed, but requires millions of 3D Gaussians, incurring significant storage and transmission costs. Recent 3DGS compression methods mainly concentrate on compressing Scaffold-GS, achieving impressive performance but at the cost of an additional voxel structure and a complex encoding and quantization strategy. In this paper, we aim to develop a simple yet effective method called NeuralGS that takes a different route: compressing the original 3DGS into a compact representation without the voxel structure or complex quantization strategies. Our observation is that neural fields like NeRF can represent complex 3D scenes with Multi-Layer Perceptron (MLP) neural networks using only a few megabytes. Thus, NeuralGS adopts the neural field representation to encode the attributes of 3D Gaussians with MLPs, requiring only a small storage size even for a large-scale scene. To achieve this, we adopt a clustering strategy and fit the Gaussians in each cluster with a different tiny MLP, using importance scores of Gaussians as fitting weights. We experiment on multiple datasets, achieving a 45-times average model size reduction without harming the visual quality. The compression performance of our method on original 3DGS is comparable to that of dedicated Scaffold-GS-based compression methods, which demonstrates the huge potential of directly compressing original 3DGS with neural fields.
Problem

Research questions and friction points this paper is trying to address.

Compress 3DGS without voxel structure or complex quantization.
Use neural fields to encode 3D Gaussian attributes compactly.
Achieve significant model size reduction without visual quality loss.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses MLPs to encode 3D Gaussian attributes.
Applies a clustering strategy with importance scores as fitting weights.
Reduces model size 45x without quality loss.
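The cluster-then-fit idea above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the random importance scores, the weighted k-means, and the tiny per-cluster MLP (trained here with plain NumPy gradient descent on an importance-weighted MSE over a single opacity attribute) are stand-ins for NeuralGS's actual pipeline, which derives importance from rendering contribution and also encodes positions and covariances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a trained 3DGS scene: N Gaussians with 3D centers,
# one scalar attribute to encode (opacity), and per-Gaussian importance
# scores. In NeuralGS, importance reflects each Gaussian's contribution
# to rendered views; here all three arrays are random for illustration.
N, K = 500, 4
positions = rng.normal(size=(N, 3))
opacities = rng.random((N, 1))
importance = rng.random(N) + 1e-3  # keep weights strictly positive

def weighted_kmeans(x, w, k, iters=20):
    """k-means whose centers are importance-weighted means of members."""
    centers = x[rng.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            m = labels == j
            if m.any():
                centers[j] = np.average(x[m], axis=0, weights=w[m])
    return labels, centers

def fit_tiny_mlp(x, y, w, hidden=16, lr=1e-2, steps=400):
    """Fit a one-hidden-layer MLP mapping position -> attribute by
    gradient descent on an importance-weighted MSE. Returns the loss
    history so convergence can be inspected."""
    W1 = rng.normal(scale=0.5, size=(x.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    wn = (w / w.sum())[:, None]          # normalized fitting weights
    losses = []
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)         # hidden activations
        pred = h @ W2 + b2
        err = pred - y
        losses.append(float((wn * err ** 2).sum()))
        g = 2 * wn * err                 # d(loss)/d(pred)
        gW2, gb2 = h.T @ g, g.sum(0)     # backprop through output layer
        gh = (g @ W2.T) * (1 - h ** 2)   # backprop through tanh
        gW1, gb1 = x.T @ gh, gh.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return losses

labels, centers = weighted_kmeans(positions, importance, K)
cluster_losses = []
for j in range(K):
    m = labels == j
    if not m.any():
        continue
    hist = fit_tiny_mlp(positions[m], opacities[m], importance[m])
    cluster_losses.append((hist[0], hist[-1]))
```

After fitting, the compressed model would consist of the K tiny MLPs' weights plus the cluster assignments, rather than explicit per-Gaussian attribute tables; this is where the storage savings come from.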