SD-GS: Structured Deformable 3D Gaussians for Efficient Dynamic Scene Reconstruction

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 4D Gaussian-based methods achieve high rendering quality and speed but suffer from excessive memory consumption and limited capacity to model complex physical motion. To address these limitations, we propose a structured deformable 3D Gaussian representation: (1) a hierarchical deformable anchor grid is constructed to generate local spatiotemporal Gaussian distributions; (2) a deformation-aware densification strategy is introduced to enable adaptive growth in dynamic regions while suppressing redundancy in static ones. Our approach preserves, and often enhances, visual fidelity while significantly improving modeling efficiency and compactness. Experimental results demonstrate that, compared to state-of-the-art methods, our model reduces storage footprint by 60% on average and doubles rendering frame rate, effectively balancing high-fidelity reconstruction with lightweight deployment requirements.

📝 Abstract
Current 4D Gaussian frameworks for dynamic scene reconstruction deliver impressive visual fidelity and rendering speed; however, the inherent trade-off between storage costs and the ability to characterize complex physical motions significantly limits their practical application. To tackle these problems, we propose SD-GS, a compact and efficient dynamic Gaussian splatting framework for complex dynamic scene reconstruction, featuring two key contributions. First, we introduce a deformable anchor grid, a hierarchical and memory-efficient scene representation where each anchor point derives multiple 3D Gaussians in its local spatiotemporal region and serves as the geometric backbone of the 3D scene. Second, to enhance modeling capability for complex motions, we present a deformation-aware densification strategy that adaptively grows anchors in under-reconstructed high-dynamic regions while reducing redundancy in static areas, achieving superior visual quality with fewer anchors. Experimental results demonstrate that, compared to state-of-the-art methods, SD-GS achieves an average 60% reduction in model size and an average 100% improvement in FPS, significantly enhancing computational efficiency while maintaining or even surpassing visual quality.
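The abstract's two ideas, anchors that each derive several local Gaussians and a densification rule driven by deformation magnitude, can be illustrated with a toy NumPy sketch. The function names, the thresholds, and the jitter scale below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of SD-GS's two ideas (NOT the paper's code):
# (1) each anchor spawns K Gaussian centers via learned offsets,
# (2) anchors are grown in high-dynamic regions and pruned in static ones.
import numpy as np


def derive_gaussians(anchors, offsets, t, deform_fn):
    """Each anchor derives K Gaussian centers in its local region,
    then a time-conditioned deformation field displaces them.

    anchors: (N, 3) anchor positions
    offsets: (N, K, 3) per-anchor learned offsets (assumed given)
    deform_fn: callable (centers, t) -> displacement, stand-in for
               the paper's learned deformation network
    """
    centers = anchors[:, None, :] + offsets      # (N, K, 3) local Gaussians
    return centers + deform_fn(centers, t)       # deformed centers at time t


def densify(anchors, deform_magnitude, grow_thresh=0.5, prune_thresh=0.05):
    """Deformation-aware densification heuristic: duplicate anchors whose
    deformation magnitude is large (under-reconstructed dynamic regions),
    drop anchors that barely move (redundant static regions).
    Thresholds are made-up illustrative values."""
    keep = anchors[deform_magnitude > prune_thresh]   # prune near-static
    grow = anchors[deform_magnitude > grow_thresh]    # grow high-dynamic
    # jitter the duplicates so new anchors cover nearby space
    rng = np.random.default_rng(0)
    new = grow + rng.normal(scale=0.01, size=grow.shape)
    return np.concatenate([keep, new], axis=0)
```

For example, with four anchors whose deformation magnitudes are `[0.6, 0.3, 0.02, 0.9]`, `densify` prunes the near-static third anchor and duplicates the two high-dynamic ones, yielding five anchors.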
Problem

Research questions and friction points this paper is trying to address.

Balancing storage costs and motion complexity in dynamic scenes
Enhancing reconstruction efficiency for complex physical motions
Reducing redundancy while maintaining high visual quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deformable anchor grid for efficient representation
Deformation-aware densification for complex motions
60% smaller model with 100% faster FPS
Authors

Wei Yao (SIGS, Tsinghua University)
Shuzhao Xie (Tsinghua University)
Letian Li (SIGS, Tsinghua University)
Weixiang Zhang (Tsinghua University)
Zhixin Lai (Google)
Shiqi Dai (Department of CST, Tsinghua University)
Ke Zhang (Soochow University)
Zhi Wang (SIGS, Tsinghua University)