🤖 AI Summary
Existing dynamic novel view synthesis (NVS) methods are highly sensitive to motion blur, leading to severe degradation in dynamic object reconstruction quality. To address this, we propose MoBGS, an end-to-end deblurring dynamic 3D Gaussian Splatting framework tailored for blurry monocular videos. Our approach introduces three key components: (1) Blur-adaptive Latent Camera Estimation (BLCE), which recovers the latent camera trajectory within each exposure to improve deblurring of global camera motion; (2) physically-inspired Latent Camera-induced Exposure Estimation (LCEE), which models the exposure process so that global camera motion and local object motion are deblurred consistently; and (3) a framework design that enforces temporal consistency at unseen latent timestamps and a robust motion decomposition of static and dynamic regions. Evaluated on the Stereo Blur dataset and real-world blurry videos, our method significantly outperforms the recent DyBluRF and Deblur4DGS, achieving state-of-the-art performance for dynamic NVS under motion blur.
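To make the exposure modeling in (1) and (2) concrete, below is a minimal PyTorch sketch of the standard latent-camera blur formation objective that such deblurring methods build on: sharp images are rendered at several latent camera poses sampled along an estimated intra-exposure trajectory, and their average (simulating the sensor's exposure integration) is compared against the captured blurry frame. All names here (`render`, `interpolate_pose`, the 6-DoF pose vectors, the 9-sample default) are illustrative assumptions, not MoBGS's actual interface.

```python
import torch
import torch.nn.functional as F


def interpolate_pose(pose_a: torch.Tensor, pose_b: torch.Tensor, s: float) -> torch.Tensor:
    """Linearly blend two 6-DoF pose vectors as a placeholder; a real
    system would interpolate on SE(3), e.g. with a learned spline."""
    return (1.0 - s) * pose_a + s * pose_b


def blur_reconstruction_loss(render, scene, pose_start, pose_end, t,
                             blurry_frame, num_latent: int = 9) -> torch.Tensor:
    """Simulate motion blur by averaging sharp renderings at latent camera
    poses inside the exposure window, then compare against the observed
    blurry frame. A hypothetical sketch, not the authors' implementation."""
    latent_renders = []
    for i in range(num_latent):
        s = i / (num_latent - 1)  # normalized position within the exposure
        pose_s = interpolate_pose(pose_start, pose_end, s)
        latent_renders.append(render(scene, pose_s, t))
    # Physical exposure model: the sensor integrates (here, averages)
    # the sharp latent images over the exposure time.
    simulated_blur = torch.stack(latent_renders).mean(dim=0)
    return F.l1_loss(simulated_blur, blurry_frame)


if __name__ == "__main__":
    # Stand-in renderer: a real 3DGS rasterizer would depend on the pose.
    H, W = 64, 64
    dummy_render = lambda scene, pose, t: torch.full((3, H, W), pose.mean().item())
    loss = blur_reconstruction_loss(dummy_render, None,
                                    torch.zeros(6), torch.ones(6), t=0.5,
                                    blurry_frame=torch.rand(3, H, W))
    print(f"blur reconstruction loss: {loss.item():.4f}")
```

Because the simulated blur is a differentiable function of the latent poses, minimizing this loss jointly refines the intra-exposure camera trajectory and the sharp scene representation.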
📝 Abstract
We present MoBGS, a novel deblurring dynamic 3D Gaussian Splatting (3DGS) framework that reconstructs sharp, high-quality novel spatio-temporal views from blurry monocular videos in an end-to-end manner. Existing dynamic novel view synthesis (NVS) methods are highly sensitive to motion blur in casually captured videos, resulting in significant degradation of rendering quality. While recent approaches address motion-blurred inputs for NVS, they primarily focus on static scene reconstruction and lack dedicated motion modeling for dynamic objects. To overcome these limitations, MoBGS introduces a novel Blur-adaptive Latent Camera Estimation (BLCE) method for effective latent camera trajectory estimation, improving global camera motion deblurring. In addition, we propose a physically-inspired Latent Camera-induced Exposure Estimation (LCEE) method to ensure consistent deblurring of both global camera and local object motion. Our framework further ensures temporal consistency at unseen latent timestamps and a robust motion decomposition of static and dynamic regions. Extensive experiments on the Stereo Blur dataset and real-world blurry videos show that MoBGS significantly outperforms recent advanced methods (DyBluRF and Deblur4DGS), achieving state-of-the-art performance for dynamic NVS under motion blur.
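To illustrate what regularizing "unseen latent timestamps" can look like in practice, here is a minimal, hypothetical PyTorch sketch: the dynamic scene is rendered at interpolated timestamps between two observed frames, and temporally adjacent latent renders are encouraged to vary smoothly. The function names, the L1 smoothness penalty, and the sample count are assumptions for illustration; the paper's actual regularizer may differ.

```python
import torch
import torch.nn.functional as F


def temporal_consistency_loss(render, scene, pose, t0: float, t1: float,
                              num_latent: int = 4) -> torch.Tensor:
    """Render at unseen latent timestamps between two observed frames and
    penalize abrupt changes between temporally adjacent renders. A
    hypothetical smoothness regularizer, not MoBGS's exact loss."""
    # Sample latent timestamps, including the two observed endpoints.
    ts = torch.linspace(t0, t1, num_latent + 2)
    frames = [render(scene, pose, float(t)) for t in ts]
    # L1 difference between consecutive latent renders, averaged.
    diffs = [F.l1_loss(a, b) for a, b in zip(frames[:-1], frames[1:])]
    return torch.stack(diffs).mean()
```

Such a term supervises the dynamic model at timestamps where no blurry observation exists, which is one plausible way to keep motion plausible between captured frames.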