AI Summary
Dynamic Neural Radiance Fields (NeRFs) suffer from geometric inconsistency and rendering instability under large viewpoint deviations. To address this, we propose ExpanDyNeRF, a novel framework that integrates Gaussian rasterization priors with pseudo-ground-truth supervision to enhance the geometric consistency and robustness of dynamic radiance field reconstruction. We further introduce SynDM, the first synthetic dynamic multi-view dataset featuring side-view supervision, built with a custom multi-view rendering pipeline based on GTA V. Our method achieves high-fidelity dynamic scene reconstruction from monocular video input. Extensive experiments demonstrate significant improvements in rendering quality under large-angle rotations on both SynDM and real-world datasets: ExpanDyNeRF outperforms state-of-the-art dynamic NeRF methods in PSNR and SSIM, while delivering markedly enhanced visual stability and fine-detail fidelity.
Abstract
In dynamic Neural Radiance Fields (NeRF) systems, state-of-the-art novel view synthesis methods often fail under significant viewpoint deviations, producing unstable and unrealistic renderings. To address this, we introduce Expanded Dynamic NeRF (ExpanDyNeRF), a monocular NeRF framework that leverages Gaussian splatting priors and a pseudo-ground-truth generation strategy to enable realistic synthesis under large-angle rotations. ExpanDyNeRF optimizes density and color features to improve scene reconstruction from challenging perspectives. We also present the Synthetic Dynamic Multiview (SynDM) dataset, the first synthetic multiview dataset for dynamic scenes with explicit side-view supervision, created using a custom GTA V-based rendering pipeline. Quantitative and qualitative results on SynDM and real-world datasets demonstrate that ExpanDyNeRF significantly outperforms existing dynamic NeRF methods in rendering fidelity under extreme viewpoint shifts. Further details are provided in the supplementary materials.
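The abstract does not spell out how the pseudo-ground-truth supervision combines with the usual NeRF objective. As a rough illustration only, the sketch below shows one plausible shape for such a loss: a standard photometric term on observed views plus a weighted consistency term against a pseudo ground truth rendered from a Gaussian-splatting prior at an unobserved side view. All function names, the density term, and the weighting factor `lam` are assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def pseudo_gt_loss(nerf_rgb, nerf_density, pseudo_rgb, pseudo_density, lam=0.1):
    """Illustrative combined loss (not the paper's exact objective).

    nerf_rgb / nerf_density: renderings from the dynamic NeRF at a
        large-angle novel viewpoint (arrays of matching shape).
    pseudo_rgb / pseudo_density: pseudo ground truth at the same viewpoint,
        e.g. rasterized from a Gaussian-splatting prior.
    lam: assumed weight balancing color vs. density consistency.
    """
    # Photometric consistency against the pseudo ground truth (MSE).
    color_term = np.mean((nerf_rgb - pseudo_rgb) ** 2)
    # Geometric consistency via the density field (MSE).
    density_term = np.mean((nerf_density - pseudo_density) ** 2)
    return color_term + lam * density_term

# Toy usage: identical renderings give zero loss; mismatches are penalized.
rgb = np.zeros((4, 4, 3))
den = np.zeros((4, 4))
assert pseudo_gt_loss(rgb, den, rgb, den) == 0.0
assert pseudo_gt_loss(rgb + 1.0, den, rgb, den) > 0.0
```

In practice such a term would be evaluated per sampled side-view camera during training and added to the usual reconstruction loss on the observed monocular frames; the sketch only conveys the structure of the supervision signal.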