AI Summary
This work addresses the degradation of 3D reconstruction quality in Neural Radiance Fields (NeRF) when trained on motion-blurred images. To this end, we propose an end-to-end framework that jointly performs motion deblurring and novel-view synthesis. Our key innovation is the first integration of explicit, continuous camera motion modeling into the NeRF optimization pipeline, realized via a Continuous Motion Blur Kernel (CMBK), a physically grounded, differentiable blur model. The method jointly optimizes both the camera trajectory and the scene's radiance field within a continuous volumetric rendering framework, enabling simultaneous motion deblurring and geometrically consistent novel-view synthesis. Evaluated on standard benchmarks, our approach achieves state-of-the-art performance, with significant improvements in PSNR and SSIM. Qualitatively, the synthesized views exhibit superior sharpness and strong geometric consistency across viewpoints.
Abstract
Neural radiance fields (NeRF) have attracted considerable attention for their exceptional ability to synthesize novel views with high fidelity. However, the presence of motion blur, resulting from slight camera movements during extended shutter exposures, poses a significant challenge, potentially compromising the quality of the reconstructed 3D scenes. To effectively handle this issue, we propose sequential motion understanding radiance fields (SMURF), a novel approach that models continuous camera motion and leverages an explicit volumetric representation for robustness to motion-blurred input images. The core idea of SMURF is the continuous motion blur kernel (CMBK), a module designed to model continuous camera movement when processing blurry inputs. Our model is evaluated on benchmark datasets and demonstrates state-of-the-art performance both quantitatively and qualitatively.
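The blur formation that a continuous motion model like CMBK accounts for can be sketched as averaging sharp renders at poses sampled along the camera trajectory during the exposure. The snippet below is a minimal illustration of that idea only; the function names, the 6-DoF pose vector, and the linear trajectory interpolation are illustrative assumptions, not the paper's actual CMBK, which learns and optimizes the trajectory jointly with the radiance field.

```python
import numpy as np

def interpolate_pose(p0, p1, t):
    # Hypothetical linear interpolation between the start and end
    # poses of the exposure, each a 6-DoF vector (translation +
    # axis-angle). A learned continuous kernel would instead
    # predict a smooth trajectory in SE(3).
    return (1.0 - t) * p0 + t * p1

def render_blurred_pixel(render_fn, p0, p1, n_samples=8):
    """Approximate a motion-blurred pixel as a Riemann-sum average
    of sharp renders at poses sampled along the exposure trajectory."""
    ts = np.linspace(0.0, 1.0, n_samples)
    colors = np.stack([render_fn(interpolate_pose(p0, p1, t)) for t in ts])
    return colors.mean(axis=0)

# Toy stand-in for a NeRF renderer: maps a pose to an RGB color.
toy_render = lambda pose: pose[:3]
pixel = render_blurred_pixel(toy_render, np.zeros(6), np.ones(6))
```

Because the averaging is differentiable with respect to the trajectory parameters, gradients from a photometric loss against the blurry input can flow back into both the trajectory and the radiance field, which is what makes joint deblurring and reconstruction possible.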