🤖 AI Summary
This work addresses the challenging problem of novel view synthesis in dark 3D scenes degraded by low illumination, noise, and motion blur. To tackle this, we propose FLED-GS, a framework that uniquely integrates low-light enhancement and deblurring into an iterative 3D reconstruction pipeline. By introducing intermediate brightness anchors to guide progressive restoration, FLED-GS alternately optimizes 2D deblurring and noise-aware 3D Gaussian Splatting (3DGS) reconstruction, effectively decoupling the degradation factors and preventing noise amplification from corrupting geometry estimation and deblurring. Compared to the state-of-the-art LuSh-NeRF, our method trains 21× faster and renders 11× faster while delivering significant improvements in reconstruction quality.
📝 Abstract
Novel view synthesis from low-light, noisy, and motion-blurred imagery remains a challenging and practically important task. Current volumetric rendering methods struggle with such compound degradation, and sequential 2D preprocessing introduces artifacts because the degradation factors are interdependent. In this work, we introduce FLED-GS, a fast low-light enhancement and deblurring framework that reformulates 3D scene restoration as an alternating cycle of enhancement and reconstruction. Specifically, FLED-GS inserts several intermediate brightness anchors to enable progressive recovery, preventing noise amplification from corrupting deblurring or geometry estimation. Each iteration sharpens the inputs with an off-the-shelf 2D deblurrer and then performs noise-aware 3DGS reconstruction that estimates and suppresses noise while producing clean priors for the next level. Experiments show FLED-GS outperforms the state-of-the-art LuSh-NeRF while achieving 21$\times$ faster training and 11$\times$ faster rendering.
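The alternating cycle described in the abstract can be sketched at a high level. This is a minimal, hypothetical illustration only: `deblur_2d` and `reconstruct_3dgs` are trivial stand-ins (not the paper's actual models), and the multiplicative brightness ladder is an assumed schedule for the intermediate anchors.

```python
def deblur_2d(views):
    """Stand-in for an off-the-shelf 2D deblurrer (identity here)."""
    return list(views)

def reconstruct_3dgs(views):
    """Stand-in for noise-aware 3DGS reconstruction: 'estimate' a small
    per-view noise level, suppress it, and return clean priors."""
    noise = 0.01  # pretend noise estimate
    return [max(v - noise, 0.0) for v in views]

def fled_gs(dark_views, target_gain=4.0, n_anchors=3):
    """Alternate 2D deblurring and noise-aware 3DGS reconstruction,
    raising brightness one intermediate anchor at a time so noise is
    never amplified all at once (values are toy pixel intensities in [0, 1])."""
    views = dark_views
    per_step_gain = target_gain ** (1.0 / n_anchors)  # multiplicative ladder
    for _ in range(n_anchors):
        brightened = [min(v * per_step_gain, 1.0) for v in views]  # next anchor
        sharp = deblur_2d(brightened)           # 2D deblurring pass
        views = reconstruct_3dgs(sharp)         # clean priors for the next level
    return views
```

Calling `fled_gs([0.1, 0.2])` returns brightened, denoised intensities still within the valid range, illustrating how each anchor makes only a modest enhancement step.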