4D Neural Voxel Splatting: Dynamic Scene Rendering with Voxelized Gaussian Splatting

📅 2025-11-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high memory consumption and slow training of 3D Gaussian Splatting (3D-GS) in dynamic scenes—caused by frame-wise Gaussian duplication—this paper proposes 4D Neural Voxel Splatting (4D-NVS). 4D-NVS employs a compact neural voxel representation to encode scene geometry and appearance, coupled with a learnable spatiotemporal deformation field to model dynamics, thereby eliminating redundant inter-frame Gaussian replication. It further introduces a view-aware refinement mechanism that adaptively optimizes challenging views via gradient-driven updates. The method integrates differentiable rendering, joint voxel-Gaussian optimization, and spatiotemporal regularization. Evaluated on multiple dynamic datasets, 4D-NVS achieves superior rendering quality over state-of-the-art methods while reducing memory usage by 42% on average and accelerating training by 2.1×. Moreover, it enables real-time novel-view synthesis.
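The core memory-saving idea in the summary above—one shared canonical Gaussian set warped by a learned spatiotemporal deformation field, rather than a separate Gaussian set per frame—can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all names, shapes, and the tiny MLP standing in for the deformation field are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One canonical set of Gaussian centers, shared across ALL timestamps
# (instead of duplicating N Gaussians for every frame).
N = 1000
canonical_means = rng.normal(size=(N, 3))

def deformation_field(xyz, t, W1, b1, W2, b2):
    """Tiny MLP standing in for the learned spatiotemporal deformation field.

    Takes Gaussian positions (N, 3) and a scalar timestamp t,
    returns per-Gaussian displacement offsets (N, 3)."""
    inp = np.concatenate([xyz, np.full((xyz.shape[0], 1), t)], axis=1)  # (N, 4)
    h = np.tanh(inp @ W1 + b1)
    return h @ W2 + b2

# Randomly initialized weights; training would fit these to the video.
W1, b1 = rng.normal(size=(4, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)) * 0.1, np.zeros(3)

# Rendering any timestamp only requires warping the canonical set:
means_t = canonical_means + deformation_field(
    canonical_means, t=0.5, W1=W1, b1=b1, W2=W2, b2=b2
)

# Memory scales with one Gaussian set plus the field's weights,
# not with (number of frames) x (number of Gaussians).
print(means_t.shape)
```

The warped centers would then feed a standard differentiable Gaussian-splatting rasterizer, which is where the joint voxel-Gaussian optimization mentioned above would take place.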

📝 Abstract
Although 3D Gaussian Splatting (3D-GS) achieves efficient rendering for novel view synthesis, extending it to dynamic scenes still results in substantial memory overhead from replicating Gaussians across frames. To address this challenge, we propose 4D Neural Voxel Splatting (4D-NVS), which combines voxel-based representations with neural Gaussian splatting for efficient dynamic scene modeling. Instead of generating separate Gaussian sets per timestamp, our method employs a compact set of neural voxels with learned deformation fields to model temporal dynamics. The design greatly reduces memory consumption and accelerates training while preserving high image quality. We further introduce a novel view refinement stage that selectively improves challenging viewpoints through targeted optimization, maintaining global efficiency while enhancing rendering quality for difficult viewing angles. Experiments demonstrate that our method outperforms state-of-the-art approaches with significant memory reduction and faster training, enabling real-time rendering with superior visual fidelity.
Problem

Research questions and friction points this paper is trying to address.

Reducing memory overhead in dynamic scene rendering
Modeling temporal dynamics with compact neural voxels
Enhancing rendering quality for challenging viewpoints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Voxel-based representations combined with neural Gaussian splatting
Neural voxels with learned deformation fields model dynamics
Novel view refinement stage enhances challenging viewpoints