🤖 AI Summary
This paper addresses high-fidelity 3D reconstruction and illumination decomposition from multi-viewpoint videos of propagating light captured under strong indirect illumination. We propose the first physically based neural inverse rendering framework for this setting. Methodologically, we extend neural radiance caching to the time-resolved transient domain, establishing a physically constrained direct/indirect light transport model compatible with multi-view temporal inputs from a flash lidar system. Our key contributions are: (1) joint high-fidelity reconstruction of 3D geometry and illumination, achieving state-of-the-art reconstruction quality under dominant indirect lighting; (2) support for transient view synthesis, automatic direct/indirect light decomposition, and multi-view time-resolved relighting of captured scenes. This work establishes a differentiable, physically consistent paradigm for non-line-of-sight imaging and see-through rendering, advancing both theoretical modeling and practical applicability in transient imaging.
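To make the direct/indirect decomposition concrete, here is a minimal toy sketch of a physically based transient image formation model of the kind the summary describes: a direct flash return placed at the round-trip travel time, plus a time-resolved indirect component. All names (`render_transient`, the Gaussian pulse deposition, normalized light speed) are illustrative assumptions, not the paper's implementation.

```python
import torch

C = 1.0  # speed of light in normalized scene units (assumption)

def render_transient(depth, albedo, indirect, t_bins):
    """Toy transient renderer: direct flash return plus a cached
    indirect component, per pixel. Hypothetical, not the paper's API.

    depth:    (H, W) distance from the collocated flash/sensor
    albedo:   (H, W) surface reflectance
    indirect: (H, W, T) time-resolved indirect radiance (e.g. from a
              neural cache queried along each camera ray)
    t_bins:   (T,) centers of the transient time bins
    """
    # Direct light arrives after a round trip of 2*depth/C and falls
    # off with the square of distance (flash lidar geometry).
    t_arrival = 2.0 * depth / C                        # (H, W)
    falloff = albedo / depth.clamp(min=1e-6) ** 2      # (H, W)

    # Deposit the direct pulse with a narrow Gaussian (one bin wide)
    # so the model stays differentiable for inverse rendering.
    sigma = t_bins[1] - t_bins[0]
    pulse = torch.exp(-0.5 * ((t_bins[None, None, :]
                               - t_arrival[..., None]) / sigma) ** 2)
    direct = falloff[..., None] * pulse                # (H, W, T)

    # Because the total transient is an explicit sum of the two terms,
    # direct/indirect separation falls out of the model for free.
    return direct + indirect, direct, indirect
```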
📝 Abstract
We present the first system for physically based, neural inverse rendering from multi-viewpoint videos of propagating light. Our approach relies on a time-resolved extension of neural radiance caching -- a technique that accelerates inverse rendering by storing infinite-bounce radiance arriving at any point from any direction. The resulting model accurately accounts for direct and indirect light transport effects and, when applied to captured measurements from a flash lidar system, enables state-of-the-art 3D reconstruction in the presence of strong indirect light. Further, we demonstrate view synthesis of propagating light, automatic decomposition of captured measurements into direct and indirect components, as well as novel capabilities such as multi-view time-resolved relighting of captured scenes.
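The abstract's core ingredient, a time-resolved radiance cache, can be pictured as a small network that stores the multi-bounce radiance arriving at a point, from a direction, at a time delay. The sketch below is a guess at the general shape of such a cache; the class name, architecture, and raw (unencoded) inputs are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class TransientRadianceCache(nn.Module):
    """Minimal sketch of a time-resolved radiance cache: an MLP that
    stores radiance arriving at point x, from direction d, at time
    delay t. Placeholder architecture, not the paper's model."""

    def __init__(self, hidden=128):
        super().__init__()
        # Inputs: 3D position + 3D direction + 1D time = 7 features.
        # A real system would apply positional/hash encodings first.
        self.net = nn.Sequential(
            nn.Linear(7, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar time-resolved radiance
        )

    def forward(self, x, d, t):
        # x: (N, 3) points, d: (N, 3) unit directions, t: (N, 1) delays
        return self.net(torch.cat([x, d, t], dim=-1))

# During inverse rendering, the cache stands in for recursive path
# tracing: instead of simulating further bounces at a surface hit,
# query the cache for the already time-shifted multi-bounce radiance.
cache = TransientRadianceCache()
x = torch.rand(1024, 3)
d = torch.randn(1024, 3)
d = d / d.norm(dim=-1, keepdim=True)
t = torch.rand(1024, 1)
radiance = cache(x, d, t)  # (1024, 1)
```

The speedup the abstract alludes to comes from this substitution: one cache query replaces an entire tail of bounce simulation during optimization.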