HGS: Hybrid Gaussian Splatting with Static-Dynamic Decomposition for Compact Dynamic View Synthesis

📅 2025-12-16
🤖 AI Summary
Dynamic novel view synthesis (NVS) suffers from excessive model size and slow rendering: existing 3D Gaussian Splatting (3DGS) approaches introduce redundancy through implicit deformation fields or indiscriminately assigned time-varying parameters. This work proposes a hybrid Gaussian splatting framework built on a Static-Dynamic Decomposition (SDD) strategy: static regions share temporally invariant Gaussian parameters, while dynamic regions use time-dependent radial basis functions (RBFs) for explicit spatiotemporal deformation modeling; a two-stage training strategy further enforces temporal coherence at static-dynamic boundaries. The method reduces model size by up to 98% and renders in real time at up to 125 FPS at 4K resolution on an RTX 3090 and 160 FPS at 1352×1014 on an RTX 3050; it has been integrated into a VR system. Rendering quality is comparable to state-of-the-art methods, with notably better fidelity for high-frequency details and abrupt scene changes.

📝 Abstract
Dynamic novel view synthesis (NVS) is essential for creating immersive experiences. Existing approaches have advanced dynamic NVS by introducing 3D Gaussian Splatting (3DGS) with implicit deformation fields or indiscriminately assigned time-varying parameters, surpassing NeRF-based methods. However, due to excessive model complexity and parameter redundancy, they incur large model sizes and slow rendering speeds, making them inefficient for real-time applications, particularly on resource-constrained devices. To obtain a more efficient model with fewer redundant parameters, in this paper we propose Hybrid Gaussian Splatting (HGS), a compact and efficient framework explicitly designed to disentangle static and dynamic regions of a scene within a unified representation. The core innovation of HGS lies in our Static-Dynamic Decomposition (SDD) strategy, which leverages Radial Basis Function (RBF) modeling for Gaussian primitives. Specifically, for dynamic regions we employ time-dependent RBFs to effectively capture temporal variations and handle abrupt scene changes, while for static regions we reduce redundancy by sharing temporally invariant parameters. Additionally, we introduce a two-stage training strategy tailored for explicit models to enhance temporal coherence at static-dynamic boundaries. Experimental results demonstrate that our method reduces model size by up to 98% and achieves real-time rendering at up to 125 FPS at 4K resolution on a single RTX 3090 GPU. It further sustains 160 FPS at 1352×1014 on an RTX 3050 and has been integrated into a VR system. Moreover, HGS achieves rendering quality comparable to state-of-the-art methods while providing significantly improved visual fidelity for high-frequency details and abrupt scene changes.
Problem

Research questions and friction points this paper is trying to address.

Existing dynamic view synthesis models carry large sizes and heavy parameter redundancy.
Static and dynamic scene content is modeled uniformly, wasting capacity and limiting rendering efficiency.
Real-time performance on resource-constrained devices, such as VR systems, remains difficult to achieve.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Static-Dynamic Decomposition disentangles scene regions
Radial Basis Function modeling captures temporal variations
Two-stage training enhances temporal coherence at boundaries
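The RBF modeling named above can be illustrated with a minimal sketch: a dynamic Gaussian's position at time t is its canonical mean plus an offset expressed as a weighted sum of temporal radial basis functions, while a static Gaussian simply keeps zero offset weights. The specific parameterization here (number of RBFs, their centers, widths, and per-RBF 3D offset weights) is our own illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def rbf_deformation(t, centers, widths, weights):
    """Time-dependent offset as a sum of Gaussian RBFs (illustrative).

    phi_k(t) = exp(-(t - c_k)^2 / (2 * s_k^2))
    offset(t) = sum_k w_k * phi_k(t)
    """
    phi = np.exp(-((t - centers) ** 2) / (2.0 * widths ** 2))  # shape (K,)
    return phi @ weights  # shape (3,): position offset at time t

# Hypothetical dynamic Gaussian: K = 4 temporal RBFs over normalized time.
centers = np.linspace(0.0, 1.0, 4)                       # RBF centers in [0, 1]
widths = np.full(4, 0.2)                                 # temporal support per RBF
weights = np.random.default_rng(0).normal(size=(4, 3)) * 0.01  # per-RBF 3D offsets

mu_canonical = np.array([0.0, 0.0, 0.0])                 # canonical Gaussian mean
mu_t = mu_canonical + rbf_deformation(0.5, centers, widths, weights)
```

Narrow widths let individual RBFs localize in time, which is one plausible way such a basis can represent abrupt changes; static regions correspond to all-zero weights, so their parameters never vary and need no per-timestep storage.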