Hybrid 3D-4D Gaussian Splatting for Fast Dynamic Scene Representation

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive computational and memory overhead, as well as the degraded image quality, caused by redundant static-region modeling in 4D Gaussian Splatting (4DGS) for dynamic 3D reconstruction, this paper proposes a hybrid 3D-4D Gaussian Splatting framework. The method introduces two key innovations: (i) the first differentiable co-optimization of 3D and 4D Gaussians with dynamic dimensionality switching, leveraging differentiable spatiotemporal modeling and adaptive Gaussian degradation to automatically downgrade static regions to 3D Gaussians while keeping dynamic regions as 4D Gaussians; and (ii) iterative pruning of redundant Gaussians. Experiments show that the approach reduces model parameters by 42% and training time by 58% on average, while matching or surpassing state-of-the-art methods in rendering quality and temporal consistency.

📝 Abstract
Recent advancements in dynamic 3D scene reconstruction have shown promising results, enabling high-fidelity 3D novel view synthesis with improved temporal consistency. Among these, 4D Gaussian Splatting (4DGS) has emerged as an appealing approach due to its ability to model high-fidelity spatial and temporal variations. However, existing methods suffer from substantial computational and memory overhead due to the redundant allocation of 4D Gaussians to static regions, which can also degrade image quality. In this work, we introduce hybrid 3D-4D Gaussian Splatting (3D-4DGS), a novel framework that adaptively represents static regions with 3D Gaussians while reserving 4D Gaussians for dynamic elements. Our method begins with a fully 4D Gaussian representation and iteratively converts temporally invariant Gaussians into 3D, significantly reducing the number of parameters and improving computational efficiency. Meanwhile, dynamic Gaussians retain their full 4D representation, capturing complex motions with high fidelity. Our approach achieves significantly faster training times compared to baseline 4D Gaussian Splatting methods while maintaining or improving the visual quality.
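The iterative conversion of temporally invariant 4D Gaussians into 3D ones can be sketched as follows. In 4DGS each Gaussian carries a temporal scale; one whose temporal support spans (nearly) the whole sequence is effectively static and can be downgraded by dropping its time dimension. The thresholding criterion, function names, and parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def split_static_dynamic(temporal_scales, sequence_duration, ratio=0.9):
    """Mark Gaussians as static when their temporal support covers at
    least `ratio` of the sequence duration (assumed criterion)."""
    return temporal_scales >= ratio * sequence_duration

def downgrade_to_3d(means_4d, static_mask):
    """Drop the time coordinate of static Gaussians, keeping (x, y, z).
    Dynamic Gaussians would retain their full 4D parameters."""
    return means_4d[static_mask, :3]

if __name__ == "__main__":
    # Five toy Gaussians with 4D means (x, y, z, t) and temporal scales.
    means = np.random.rand(5, 4)
    scales_t = np.array([0.95, 0.1, 1.0, 0.3, 0.92])
    mask = split_static_dynamic(scales_t, sequence_duration=1.0)
    static_3d = downgrade_to_3d(means, mask)
    print(mask.sum(), static_3d.shape)  # 3 static Gaussians -> (3, 3)
```

In training this check would be repeated every few iterations, so Gaussians that settle into time-invariance are progressively moved to the cheaper 3D representation while the remainder keep modeling motion in 4D.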
Problem

Research questions and friction points this paper is trying to address.

Reduces computational overhead in dynamic 3D scene reconstruction
Optimizes memory usage by hybrid 3D-4D Gaussian representation
Improves training speed without sacrificing visual quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid 3D-4D Gaussian Splatting framework
Adaptive 3D Gaussians for static regions
4D Gaussians reserved for dynamic elements