DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 0
🤖 AI Summary
This work addresses 3D reconstruction and novel-view synthesis from monocular dynamic videos. We propose a hybrid framework that jointly optimizes deformable Gaussian splatting and dynamic neural surfaces. Our key contributions are: (1) a bidirectional coupling mechanism between the two modules: Gaussian splatting provides depth supervision and efficient ray sampling guidance to the neural surface, while the neural surface imposes geometric consistency constraints on Gaussian distributions to enhance rendering fidelity; and (2) rasterized depth map filtering, which significantly improves geometric reconstruction accuracy. Through end-to-end co-optimization, our method simultaneously refines geometry and appearance. Evaluated on standard benchmarks, it achieves state-of-the-art performance in both novel-view synthesis and 3D reconstruction, outperforming existing methods in geometric fidelity and visual realism.
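The "ray sampling guidance" mentioned above can be illustrated with a minimal sketch: instead of sampling points uniformly along a ray, samples are concentrated in a band around the coarse depth rendered by the splatting module. The function name, the Gaussian band, and the `sigma` width are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def depth_guided_samples(depth, n_samples=16, sigma=0.05, rng=None):
    """Sketch of depth-guided ray sampling (illustrative, not the paper's code).

    depth: coarse depth at this pixel from Gaussian rasterization.
    Samples are drawn from a narrow Gaussian around that depth, so the
    neural surface is queried mostly near the likely intersection,
    which is the source of the claimed speed-up over uniform sampling.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    t = rng.normal(loc=depth, scale=sigma, size=(n_samples,))
    # Clamp to non-negative ray distances and sort for front-to-back compositing.
    return np.sort(np.clip(t, a_min=0.0, a_max=None))
```

For example, `depth_guided_samples(2.0)` returns 16 sorted sample distances clustered tightly around 2.0, rather than spread over the full ray extent.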

๐Ÿ“ Abstract
Dynamic scene reconstruction from monocular video is critical for real-world applications. This paper tackles the dual challenges of dynamic novel-view synthesis and 3D geometry reconstruction by introducing a hybrid framework: Deformable Gaussian Splatting and Dynamic Neural Surfaces (DGNS), in which both modules can leverage each other for both tasks. During training, depth maps generated by the deformable Gaussian splatting module guide the ray sampling for faster processing and provide depth supervision within the dynamic neural surface module to improve geometry reconstruction. Simultaneously, the dynamic neural surface directs the distribution of Gaussian primitives around the surface, enhancing rendering quality. To further refine depth supervision, we introduce a depth-filtering process on depth maps derived from Gaussian rasterization. Extensive experiments on public datasets demonstrate that DGNS achieves state-of-the-art performance in both novel-view synthesis and 3D reconstruction.
Problem

Research questions and friction points this paper is trying to address.

Monocular dynamic 3D reconstruction from video
Simultaneous novel-view synthesis and geometry reconstruction
Improving rendering quality and processing speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deformable Gaussian Splatting for depth guidance
Dynamic Neural Surfaces for geometry reconstruction
Depth-filtering approach to refine supervision