🤖 AI Summary
This work addresses the challenge that under dynamic illumination, scene radiance becomes highly entangled with lighting color, hindering neural radiance fields from achieving consistent geometry reconstruction and editability. To resolve this, the authors propose leveraging data captured during a “rehearsal phase” under stable lighting to learn a time-dependent illumination vector that disentangles lighting from the scene’s intrinsic radiance. By further integrating interactive masks and optical flow regularization, the method effectively separates dynamic objects from the static background. This approach substantially improves the robustness of novel view synthesis and the quality of scene editing under varying lighting conditions. The code and a new video dataset will be publicly released.
📝 Abstract
Although there has been significant progress in neural radiance fields, the problem of dynamic illumination changes remains unsolved. Unlike related works that parameterize time-variant and time-invariant components of a scene, we observe that a subject's appearance is highly entangled with its own emitted radiance and the projected lighting colors in the spatio-temporal domain. In this paper, we present a new, effective method, named RehearsalNeRF, for learning disentangled neural fields under severe illumination changes. Our key idea is to leverage scenes captured under stable lighting, such as rehearsal stages, which can easily be recorded before the dynamic illumination occurs, to enforce geometric consistency across the different lighting conditions. In particular, RehearsalNeRF employs a learnable lighting-effect vector that represents illumination colors along the temporal dimension and is used to disentangle projected light colors from the scene's intrinsic radiance. Furthermore, RehearsalNeRF can also reconstruct neural fields of dynamic objects by simply adopting off-the-shelf interactive masks. To decouple the dynamic objects, we propose a new regularization based on optical flow, which provides coarse supervision for the color disentanglement. We demonstrate the effectiveness of RehearsalNeRF through robust performance on novel view synthesis and scene editing under dynamic illumination conditions. Our source code and video datasets will be publicly available.
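The multiplicative disentanglement described above can be illustrated with a minimal sketch. This is only an assumption-laden toy model, not the paper's actual parameterization: `illum` stands in for the learnable per-frame illumination color vector, and `intrinsic_radiance` stands in for the lighting-invariant radiance predicted by the field; both names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame illumination colors (T frames, RGB). In the actual
# method these would be learnable and optimized jointly with the radiance
# field; here they are fixed random values for illustration only.
T = 4
illum = rng.uniform(0.5, 1.5, size=(T, 3))

def intrinsic_radiance(points):
    """Stand-in for the lighting-invariant radiance of the scene."""
    return np.clip(np.abs(np.sin(points)), 0.0, 1.0)

def observed_color(points, t):
    """Observed color = per-frame illumination color * intrinsic radiance."""
    return illum[t] * intrinsic_radiance(points)

pts = rng.normal(size=(5, 3))
c0 = observed_color(pts, 0)  # same points, frame 0 lighting
c1 = observed_color(pts, 1)  # same points, frame 1 lighting

# Dividing out each frame's illumination vector recovers the same intrinsic
# radiance from both frames -- the invariance the disentanglement relies on,
# with the stable "rehearsal" capture anchoring the shared geometry.
r0 = c0 / illum[0]
r1 = c1 / illum[1]
assert np.allclose(r0, r1)
```

Under this toy model, novel view synthesis under new lighting reduces to re-multiplying the recovered intrinsic radiance by a different illumination vector.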