🤖 AI Summary
Existing NeRF methods rely on straight-ray volume rendering, making them ill-suited for modeling non-direct light transport in motion-blurred images and leading to geometric ambiguity and reconstruction artifacts. To address this, we propose In-scattering Neural Radiance Fields (Is-NeRF), the first neural radiance field framework that explicitly incorporates scattering physics into differentiable rendering. Is-NeRF establishes a unified scattering-aware rendering model capable of representing six canonical light transport phenomena. It explicitly models incident scattered light paths and jointly optimizes the scene's radiance field, scattering parameters, and camera motion trajectory. We further introduce an adaptive directional sampling and step-size strategy to improve gradient accuracy in differentiable rendering. Evaluated on real-world motion-blurred scenes, Is-NeRF achieves significant improvements in geometric reconstruction fidelity and image quality, enabling high-fidelity 3D reconstruction with deblurring from blurry inputs. Results demonstrate its effectiveness and generalizability in modeling complex light transport.
📝 Abstract
Neural Radiance Fields (NeRF) have gained significant attention for their powerful implicit 3D representation and realistic novel view synthesis capabilities. Existing works, without exception, employ straight-line volume rendering, which struggles to handle sophisticated light-path scenarios and introduces geometric ambiguities during training, particularly evident when processing motion-blurred images. To address these challenges, this work proposes a novel deblurring neural radiance field, Is-NeRF, featuring explicit light-path modeling in real-world environments. By unifying six common light propagation phenomena through an in-scattering representation, we establish a new scattering-aware volume rendering pipeline adaptable to complex light paths. Additionally, we introduce an adaptive learning strategy that autonomously determines scattering directions and sampling intervals to capture finer object details. The proposed network jointly optimizes NeRF parameters, scattering parameters, and camera motions to recover fine-grained scene representations from blurry images. Comprehensive evaluations demonstrate that it effectively handles complex real-world scenarios, outperforming state-of-the-art approaches in generating high-fidelity images with accurate geometric details.
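To make the "scattering-aware volume rendering" idea concrete, the sketch below augments the standard NeRF quadrature with a per-sample in-scattered radiance term. This is a minimal illustration, not the paper's actual formulation: the `in_scatter` array stands in for whatever the learned scattering model would predict, and all variable names here are assumptions for the sake of the example.

```python
import numpy as np

def volume_render(sigmas, colors, in_scatter, deltas):
    """Toy scattering-aware volume rendering along one ray.

    sigmas:     (N,)   densities at the N ray samples
    colors:     (N, 3) emitted radiance at each sample
    in_scatter: (N, 3) hypothetical in-scattered radiance added at each
                       sample (zero recovers plain straight-ray NeRF rendering)
    deltas:     (N,)   step sizes between consecutive samples
    """
    # Per-segment opacity from density and step size
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # In-scattering augments the emitted radiance at each sample
    radiance = colors + in_scatter
    return (weights[:, None] * radiance).sum(axis=0)
```

Setting `in_scatter` to zero reduces this to the classic alpha-compositing quadrature, which makes explicit that the scattering term is an additive extension of, rather than a replacement for, straight-ray rendering.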