Fast and Robust Deformable 3D Gaussian Splatting

📅 2026-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic scene reconstruction often suffers from low rendering efficiency, sensitivity to initial point clouds, and susceptibility to local optima under low-light conditions. This work proposes an efficient and robust deformable 3D Gaussian splatting framework that accelerates rendering through early fusion of per-Gaussian features with coarse-to-fine temporal embeddings. To ease the optimization of deformation fields, a novel initialization strategy guided by depth priors and reconstruction errors is introduced. Additionally, opacity modulation is incorporated to mitigate convergence to suboptimal solutions in dark environments. The proposed method achieves state-of-the-art visual quality while significantly improving rendering speed and enhancing robustness to sparse initializations and challenging low-light scenarios.

📝 Abstract
3D Gaussian Splatting has demonstrated remarkable real-time rendering capabilities and superior visual quality in novel view synthesis for static scenes. Building upon these advantages, researchers have progressively extended 3D Gaussians to dynamic scene reconstruction. Deformation field-based methods have emerged as a promising approach among various techniques. These methods maintain 3D Gaussian attributes in a canonical field and employ the deformation field to transform this field across temporal sequences. Nevertheless, these approaches frequently encounter challenges such as suboptimal rendering speeds, significant dependence on initial point clouds, and vulnerability to local optima in dim scenes. To overcome these limitations, we present FRoG, an efficient and robust framework for high-quality dynamic scene reconstruction. FRoG integrates per-Gaussian embedding with a coarse-to-fine temporal embedding strategy, accelerating rendering through the early fusion of temporal embeddings. Moreover, to enhance robustness against sparse initializations, we introduce a novel depth- and error-guided sampling strategy. This strategy populates the canonical field with new 3D Gaussians at low-deviation initial positions, significantly reducing the optimization burden on the deformation field and improving detail reconstruction in both static and dynamic regions. Furthermore, by modulating opacity variations, we mitigate the local optima problem in dim scenes, improving color fidelity. Comprehensive experimental results validate that our method achieves accelerated rendering speeds while maintaining state-of-the-art visual quality.
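As the abstract describes, deformation field-based methods keep Gaussian attributes in a canonical field and use a learned deformation field, conditioned on time, to move them across the temporal sequence. A minimal numpy sketch of that idea is below; the layer sizes, the sinusoidal time embedding, and the random stand-in weights are illustrative assumptions, not details taken from FRoG.

```python
import numpy as np

# Hypothetical sketch: Gaussian centers live in a canonical field; a small
# MLP, conditioned on a temporal embedding, predicts per-Gaussian offsets
# for each timestamp. Weights are random stand-ins for trained parameters.
rng = np.random.default_rng(0)

N, D_t = 1000, 8                          # Gaussians, time-embedding size
canonical_xyz = rng.normal(size=(N, 3))   # canonical Gaussian centers

W1 = rng.normal(size=(3 + D_t, 64)) * 0.1
W2 = rng.normal(size=(64, 3)) * 0.1

def temporal_embedding(t: float, dim: int = D_t) -> np.ndarray:
    """Sinusoidal embedding of a scalar timestamp (a common choice)."""
    freqs = 2.0 ** np.arange(dim // 2)
    return np.concatenate([np.sin(freqs * t), np.cos(freqs * t)])

def deform(xyz: np.ndarray, t: float) -> np.ndarray:
    """Map canonical Gaussian centers to their positions at time t."""
    emb = np.broadcast_to(temporal_embedding(t), (xyz.shape[0], D_t))
    h = np.maximum(np.concatenate([xyz, emb], axis=1) @ W1, 0.0)  # ReLU
    return xyz + h @ W2    # canonical position + predicted offset

deformed = deform(canonical_xyz, t=0.5)
print(deformed.shape)  # (1000, 3)
```

In this framing, only the deformation network varies with time; the canonical Gaussians are shared, which is what makes the initialization of the canonical field (e.g. via the paper's depth- and error-guided sampling) so consequential for optimization.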
Problem

Research questions and friction points this paper is trying to address.

dynamic scene reconstruction
3D Gaussian Splatting
deformation field
rendering speed
local optima
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deformable 3D Gaussian Splatting
Temporal Embedding
Depth-guided Sampling
Robust Dynamic Reconstruction
Opacity Modulation
Han Jiao
College of Computer Science and Technology, Zhejiang University, China
Jiakai Sun
Zhejiang University
3D Vision, Rendering
Lei Zhao
Zhejiang University
Multimodal large models, video and 3D content generation (AIGC), virtual digital humans
Zhanjie Zhang
Zhejiang University
computer vision
Wei Xing
College of Computer Science and Technology, Zhejiang University, China
Huaizhong Lin
College of Computer Science and Technology, Zhejiang University, China