🤖 AI Summary
This paper addresses 3D scene reconstruction from multi-view images captured under unknown camera poses and low-light conditions. The authors propose the first end-to-end, generalizable, single-forward-pass framework that eliminates per-scene optimization. The method employs a 3D Gaussian representation to jointly model geometry and illumination, built on a geometry-grounded backbone, with a cross-illumination knowledge-distillation mechanism in which a teacher network transfers depth priors to the student, and a dedicated Lumos loss that enforces multi-view photometric consistency. Unlike existing approaches, the framework requires no pose initialization or scene-specific fine-tuning, significantly improving reconstruction fidelity. On real-world low-light datasets it recovers both geometry and texture with high accuracy, generalizes well to unseen scenes, and extends naturally to handling overexposed regions.
📝 Abstract
Restoring 3D scenes captured under low-light conditions remains a fundamental yet challenging problem. Most existing approaches depend on precomputed camera poses and scene-specific optimization, which greatly restricts their scalability to dynamic real-world environments. To overcome these limitations, we introduce Lumos3D, a generalizable pose-free framework for 3D low-light scene restoration. Trained once on a single dataset, Lumos3D performs inference in a purely feed-forward manner, directly restoring illumination and structure from unposed, low-light multi-view images without any per-scene training or optimization. Built upon a geometry-grounded backbone, Lumos3D reconstructs a normal-light 3D Gaussian representation that restores illumination while faithfully preserving structural details. During training, a cross-illumination distillation scheme is employed, where the teacher network is distilled on normal-light ground truth to transfer accurate geometric information, such as depth, to the student model. A dedicated Lumos loss is further introduced to promote photometric consistency within the reconstructed 3D space. Experiments on real-world datasets demonstrate that Lumos3D achieves high-fidelity low-light 3D scene restoration with accurate geometry and strong generalization to unseen cases. Furthermore, the framework naturally extends to handle over-exposure correction, highlighting its versatility for diverse lighting restoration tasks.
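The abstract describes two training signals: a cross-illumination distillation term, where a teacher trained on normal-light ground truth supervises the student's depth, and a Lumos loss promoting photometric consistency across views. The paper does not give the exact loss formulas here, so the sketch below is a minimal, hypothetical interpretation: L1 depth distillation plus a per-pixel deviation-from-mean penalty across rendered views. Function names and weights (`w_distill`, `w_photo`) are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def distillation_loss(student_depth, teacher_depth):
    # Hypothetical cross-illumination distillation: L1 distance between the
    # student's predicted depth and the teacher's depth (teacher trained on
    # normal-light ground truth).
    return float(np.mean(np.abs(np.asarray(student_depth) - np.asarray(teacher_depth))))

def photometric_consistency_loss(rendered_views):
    # Hypothetical stand-in for the Lumos loss: penalize each rendered view's
    # per-pixel deviation from the mean across all views, encouraging
    # multi-view photometric agreement. Shape: (num_views, H, W, C).
    views = np.asarray(rendered_views, dtype=np.float64)
    mean_view = views.mean(axis=0)
    return float(np.mean(np.abs(views - mean_view)))

def total_training_loss(student_depth, teacher_depth, rendered_views,
                        w_distill=0.5, w_photo=1.0):
    # Illustrative weighted sum; the paper's actual weighting is not specified
    # in the abstract.
    return (w_distill * distillation_loss(student_depth, teacher_depth)
            + w_photo * photometric_consistency_loss(rendered_views))
```

Both terms vanish when the student matches the teacher's geometry and all views agree photometrically, which is the behavior the abstract attributes to the combined objective.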