Broadening View Synthesis of Dynamic Scenes from Constrained Monocular Videos

📅 2025-12-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Dynamic Neural Radiance Fields (NeRFs) suffer from geometric inconsistency and rendering instability under large viewpoint deviations. To address this, we propose ExpanDyNeRF, a novel framework that jointly integrates Gaussian rasterization priors with pseudo-ground-truth supervision to enhance the geometric consistency and robustness of dynamic radiance field reconstruction. We further introduce SynDM, the first synthetic dynamic multi-view dataset featuring side-view supervision, and develop a custom multi-view rendering pipeline based on GTA V. Our method achieves high-fidelity dynamic scene reconstruction from monocular video input. Extensive experiments demonstrate significant improvements in rendering quality under large-angle rotations on both SynDM and real-world datasets: ExpanDyNeRF outperforms state-of-the-art dynamic NeRF methods in PSNR and SSIM, while delivering markedly enhanced visual stability and fine-detail fidelity.

๐Ÿ“ Abstract
In dynamic Neural Radiance Fields (NeRF) systems, state-of-the-art novel view synthesis methods often fail under significant viewpoint deviations, producing unstable and unrealistic renderings. To address this, we introduce Expanded Dynamic NeRF (ExpanDyNeRF), a monocular NeRF framework that leverages Gaussian splatting priors and a pseudo-ground-truth generation strategy to enable realistic synthesis under large-angle rotations. ExpanDyNeRF optimizes density and color features to improve scene reconstruction from challenging perspectives. We also present the Synthetic Dynamic Multiview (SynDM) dataset, the first synthetic multiview dataset for dynamic scenes with explicit side-view supervision, created using a custom GTA V-based rendering pipeline. Quantitative and qualitative results on SynDM and real-world datasets demonstrate that ExpanDyNeRF significantly outperforms existing dynamic NeRF methods in rendering fidelity under extreme viewpoint shifts. Further details are provided in the supplementary materials.
Problem

Research questions and friction points this paper is trying to address.

Improves novel view synthesis under large viewpoint deviations
Addresses unstable renderings in dynamic NeRF from monocular videos
Enhances scene reconstruction from challenging perspectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Gaussian splatting priors for dynamic NeRF
Uses pseudo-ground-truth generation for large-angle rotations
Introduces synthetic multiview dataset with side-view supervision
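The summary reports gains in PSNR, the standard fidelity metric for comparing rendered novel views against ground-truth frames. As a minimal illustrative sketch (not the paper's evaluation code), PSNR can be computed from the mean squared error between a reference frame and a rendered frame; the example images below are hypothetical:

```python
import math

def psnr(reference, rendered, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-size images,
    given here as flat lists of pixel intensities in [0, max_val]."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, rendered)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10 * math.log10(max_val ** 2 / mse)

# Hypothetical ground-truth frame vs. a slightly noisy render.
gt = [0.0, 0.25, 0.5, 0.75, 1.0]
render = [0.0, 0.26, 0.49, 0.76, 0.99]
print(round(psnr(gt, render), 2))  # → 40.97
```

Higher PSNR means the render is closer to the reference; values around 30 dB and above are typically considered good for view synthesis.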
Le Jiang
Northeastern University, Boston, MA
Shaotong Zhu
Northeastern University, Boston, MA
Yedi Luo
Northeastern University, Boston, MA
Shayda Moezzi
Northeastern University, Boston, MA
Sarah Ostadabbas
Electrical & Computer Engineering, Northeastern University
Computer Vision · Machine Learning · Artificial Intelligence · Augmented Cognition with Medical