Is-NeRF: In-scattering Neural Radiance Field for Blurred Images

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing NeRF methods rely on straight-ray volume rendering, making them ill-suited for modeling non-direct light transport in motion-blurred images, which leads to geometric ambiguity and reconstruction artifacts. To address this, we propose In-scattering Neural Radiance Fields (Is-NeRF), the first neural radiance field framework that explicitly incorporates scattering physics into differentiable rendering. Is-NeRF establishes a unified scattering-aware rendering model capable of representing six canonical light transport phenomena. It explicitly models incident scattered light paths and jointly optimizes the scene's radiance field, scattering parameters, and camera motion trajectory. We further introduce an adaptive directional sampling and step-size strategy to enhance gradient accuracy in differentiable rendering. Evaluated on real-world motion-blurred scenes, Is-NeRF achieves significant improvements in geometric reconstruction fidelity and image quality, enabling high-fidelity deblurred 3D reconstruction. Results demonstrate its effectiveness and generalizability in modeling complex light transport.

📝 Abstract
Neural Radiance Fields (NeRF) have gained significant attention for their powerful implicit 3D representation and realistic novel view synthesis capabilities. Existing works uniformly employ straight-line volume rendering, which struggles to handle sophisticated lightpath scenarios and introduces geometric ambiguities during training, particularly when processing motion-blurred images. To address these challenges, this work proposes a novel deblurring neural radiance field, Is-NeRF, featuring explicit lightpath modeling in real-world environments. By unifying six common light propagation phenomena through an in-scattering representation, we establish a new scattering-aware volume rendering pipeline adaptable to complex lightpaths. Additionally, we introduce an adaptive learning strategy that autonomously determines scattering directions and sampling intervals to capture finer object details. The proposed network jointly optimizes NeRF parameters, scattering parameters, and camera motions to recover fine-grained scene representations from blurry images. Comprehensive evaluations demonstrate that it effectively handles complex real-world scenarios, outperforming state-of-the-art approaches in generating high-fidelity images with accurate geometric details.
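The scattering-aware rendering described above can be illustrated with a minimal sketch. The paper's exact in-scattering formulation is not reproduced here; `scatter_radiance` and `scatter_weight` below are hypothetical stand-ins for the learned in-scattering term, blended into standard NeRF alpha compositing:

```python
import numpy as np

def render_ray(sigma, color, scatter_radiance, scatter_weight, deltas):
    """Volume rendering with a simple in-scattering term (illustrative sketch).

    sigma:            (N,) densities at N samples along the ray
    color:            (N, 3) directly emitted radiance at each sample
    scatter_radiance: (N, 3) radiance arriving from an assumed learned scatter
                      direction -- a stand-in for the paper's in-scattering term
    scatter_weight:   (N,) blend weight in [0, 1] between direct and scattered light
    deltas:           (N,) distances between consecutive samples
    """
    alpha = 1.0 - np.exp(-sigma * deltas)                           # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]   # transmittance T_i
    # blend direct emission with in-scattered radiance before compositing
    radiance = (1 - scatter_weight)[:, None] * color \
        + scatter_weight[:, None] * scatter_radiance
    weights = trans * alpha
    return (weights[:, None] * radiance).sum(axis=0)
```

With `scatter_weight` set to zero this reduces to standard NeRF compositing, which is what makes the formulation a superset of straight-line volume rendering.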
Problem

Research questions and friction points this paper is trying to address.

Handling motion-blurred images in neural rendering
Modeling complex lightpaths to resolve geometric ambiguities
Recovering fine-grained scene details from blurry inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-scattering representation for light propagation
Adaptive learning strategy for scattering directions
Joint optimization of NeRF and scattering parameters
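The adaptive sampling idea in the list above can be sketched generically. The paper's actual direction and step-size scheme is not specified here, so the hypothetical `adaptive_resample` below falls back on NeRF-style inverse-CDF resampling as an assumed approximation: fine samples are concentrated in the coarse intervals that carry large rendering weights.

```python
import numpy as np

def adaptive_resample(t_coarse, weights, n_fine, rng=None):
    """Place fine samples where coarse weights are large (inverse-CDF sampling).

    t_coarse: (N+1,) bin edges of the coarse samples along the ray
    weights:  (N,) coarse rendering weights per bin
    n_fine:   number of fine samples to draw
    """
    rng = np.random.default_rng(rng)
    pdf = weights + 1e-5                 # avoid empty bins
    pdf = pdf / pdf.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.uniform(size=n_fine)
    idx = np.searchsorted(cdf, u, side="right") - 1
    idx = np.clip(idx, 0, len(weights) - 1)
    # jitter uniformly within each selected coarse interval
    lo, hi = t_coarse[idx], t_coarse[idx + 1]
    return lo + (hi - lo) * rng.uniform(size=n_fine)
```

Concentrating samples this way sharpens gradients near surfaces, which is the stated motivation for the paper's adaptive step-size strategy.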
Authors
Nan Luo (Xidian University)
Chenglin Ye (University of the Chinese Academy of Sciences)
Jiaxu Li (Central South University) — machine learning
Gang Liu (Xidian University)
Bo Wan (KU Leuven) — computer vision, vision-language, visual scene understanding
Di Wang (Xidian University)
Lupeng Liu (University of the Chinese Academy of Sciences)
Jun Xiao (University of the Chinese Academy of Sciences)