🤖 AI Summary
This work addresses the poor reconstruction quality and excessive training time of Neural Radiance Fields (NeRF) under motion and defocus blur. To this end, we propose DeepDeblurRF, a novel framework featuring radiance-field-guided deblurring performed iteratively and in alternation with radiance field construction, so that blur correction and scene modeling are refined jointly. Our approach supports multiple scene representations, including voxel grids and 3D Gaussians. Furthermore, we introduce BlurRF-Synth, the first large-scale synthetic dataset specifically designed for training radiance field deblurring frameworks. Extensive experiments demonstrate that DeepDeblurRF achieves state-of-the-art novel-view synthesis quality on both camera-motion-blurred and defocus-blurred scenes, reduces training time significantly compared to baseline methods, and generalizes across diverse blur types and scene representations.
📝 Abstract
In this paper, we propose DeepDeblurRF, a novel radiance field deblurring approach that can synthesize high-quality novel views from blurred training views with significantly reduced training time. DeepDeblurRF leverages deep neural network (DNN)-based deblurring modules to benefit from their deblurring performance and computational efficiency. To effectively combine DNN-based deblurring and radiance field construction, we propose a novel radiance field (RF)-guided deblurring scheme and an iterative framework that performs RF-guided deblurring and radiance field construction in an alternating manner. Moreover, DeepDeblurRF is compatible with various scene representations, such as voxel grids and 3D Gaussians, expanding its applicability. We also present BlurRF-Synth, the first large-scale synthetic dataset for training radiance field deblurring frameworks. We conduct extensive experiments on both camera motion blur and defocus blur, demonstrating that DeepDeblurRF achieves state-of-the-art novel-view synthesis quality with significantly reduced training time.
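To make the alternating structure described above concrete, the sketch below outlines one plausible reading of the iterative loop: deblur the training views, construct a radiance field from them, render the training viewpoints, and feed those renderings back as guidance for the next deblurring pass. This is a minimal illustration only; the function names, the guidance interface, and the number of rounds are assumptions, not the authors' actual API or training schedule.

```python
from typing import Any, Callable, List, Optional

def deep_deblur_rf(
    blurred_views: List[Any],
    poses: List[Any],
    deblur: Callable[[Any, Optional[Any]], Any],       # DNN-based deblurring module (hypothetical signature)
    build_rf: Callable[[List[Any], List[Any]], Any],    # radiance field construction, e.g. voxel grid or 3D Gaussians
    render: Callable[[Any, Any], Any],                  # renders a training viewpoint from the radiance field
    num_rounds: int = 5,                                # number of alternations (assumption)
) -> Any:
    """Sketch of alternating RF-guided deblurring and radiance field construction."""
    # Initial deblurring pass with no radiance-field guidance (assumption).
    deblurred = [deblur(img, None) for img in blurred_views]

    radiance_field = None
    for _ in range(num_rounds):
        # (Re)construct the radiance field from the current deblurred views.
        radiance_field = build_rf(deblurred, poses)

        # Render the training viewpoints and use the renderings as guidance
        # for the next deblurring pass (RF-guided deblurring).
        rendered = [render(radiance_field, pose) for pose in poses]
        deblurred = [deblur(b, g) for b, g in zip(blurred_views, rendered)]

    return radiance_field
```

Passing the deblurring, construction, and rendering steps in as callables reflects the abstract's claim that the framework is representation-agnostic: the same loop could wrap a voxel-grid or a 3D Gaussian backend.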