SplatBright: Generalizable Low-Light Scene Reconstruction from Sparse Views via Physically-Guided Gaussian Enhancement

📅 2025-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing low-light 3D reconstruction methods under sparse views suffer from exposure inconsistency, color distortion, and strong scene dependency. Method: This paper proposes the first generalizable 3D Gaussian reconstruction framework, featuring: (1) a physics-based camera model for synthesizing dark views to enable end-to-end low-light enhancement; (2) a physics-guided dual-branch geometric initialization that disentangles illumination, material, and view-dependent factors in the appearance representation; and (3) a frequency-prior-driven cross-view illumination consistency constraint to jointly optimize geometry and appearance. Contribution/Results: The method achieves zero-shot generalization to unseen low-light scenes. It significantly outperforms state-of-the-art 2D and 3D low-light reconstruction approaches on both public and custom datasets, delivering superior novel-view synthesis quality, cross-view consistency, and generalization capability.

📝 Abstract
Low-light 3D reconstruction from sparse views remains challenging due to exposure imbalance and degraded color fidelity. While existing methods struggle with view inconsistency and require per-scene training, we propose SplatBright, which is, to our knowledge, the first generalizable 3D Gaussian framework for joint low-light enhancement and reconstruction from sparse sRGB inputs. Our key idea is to integrate physically guided illumination modeling with geometry-appearance decoupling for consistent low-light reconstruction. Specifically, we adopt a dual-branch predictor that provides stable geometric initialization of 3D Gaussian parameters. On the appearance side, illumination consistency leverages frequency priors to enable controllable and cross-view coherent lighting, while an appearance refinement module further separates illumination, material, and view-dependent cues to recover fine texture. To tackle the lack of large-scale geometrically consistent paired data, we synthesize dark views via a physics-based camera model for training. Extensive experiments on public and self-collected datasets demonstrate that SplatBright achieves superior novel view synthesis, cross-view consistency, and better generalization to unseen low-light scenes compared with both 2D and 3D methods.
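The abstract's "frequency priors" for illumination consistency follow a common assumption in low-light enhancement: illumination varies smoothly across an image, so it lives in the low spatial frequencies, while reflectance carries the high-frequency texture. A minimal Retinex-style sketch of that prior, using an FFT low-pass in log space (the function name, cutoff value, and decomposition details are illustrative assumptions, not the paper's actual module):

```python
import numpy as np

def decompose_illumination(image, cutoff_frac=0.05):
    """Split a grayscale image into illumination and reflectance.

    Frequency prior (assumed here): illumination is the low-frequency
    part of the log image. We keep only spatial frequencies below
    `cutoff_frac` (cycles/pixel) as log-illumination; the residual is
    log-reflectance, so image = illumination * reflectance.
    """
    log_img = np.log(np.clip(image, 1e-4, 1.0))
    spectrum = np.fft.fft2(log_img)
    h, w = image.shape
    # Radial low-pass mask over the 2D frequency grid.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mask = (np.sqrt(fy**2 + fx**2) <= cutoff_frac).astype(float)
    log_illum = np.real(np.fft.ifft2(spectrum * mask))
    log_refl = log_img - log_illum
    return np.exp(log_illum), np.exp(log_refl)

# Example: a smooth vertical lighting gradient times high-frequency texture.
y = np.linspace(0.2, 1.0, 64)[:, None] * np.ones((1, 64))
texture = 1.0 + 0.1 * np.sin(np.arange(64) * 2.0)[None, :]
illum, refl = decompose_illumination(np.clip(y * texture, 0, 1))
```

Enforcing that the low-frequency illumination component agrees across views (while reflectance stays view-consistent by construction) is one way such a prior can drive cross-view coherent lighting.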
Problem

Research questions and friction points this paper is trying to address.

Reconstructs 3D scenes from sparse low-light sRGB views
Enhances images by modeling illumination and separating appearance cues
Generalizes to unseen scenes without per-scene training
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Gaussian framework for low-light enhancement and reconstruction
Dual-branch predictor for stable geometric initialization
Physics-based camera model synthesizes dark views for training
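The dark-view synthesis above can be sketched with a standard low-light image-formation model: undo the display gamma to approximate linear radiance, scale exposure down, add signal-dependent shot noise plus read noise, and map back to sRGB. All parameter values and the function name below are illustrative assumptions, not the paper's calibrated camera model:

```python
import numpy as np

def synthesize_dark_view(srgb, exposure_scale=0.1, read_noise_std=0.002,
                         shot_noise_scale=0.005, gamma=2.2, seed=0):
    """Synthesize a low-light counterpart of a well-lit sRGB image.

    Degradation is applied in (approximately) linear radiance space,
    where exposure scaling and sensor noise are physically meaningful.
    """
    rng = np.random.default_rng(seed)
    # 1. Invert display gamma: sRGB -> approximate linear radiance.
    linear = np.clip(srgb, 0.0, 1.0) ** gamma
    # 2. Reduce exposure (dim scene / short shutter / small aperture).
    dark = linear * exposure_scale
    # 3. Signal-dependent shot noise plus signal-independent read noise.
    noise_std = np.sqrt(shot_noise_scale * dark) + read_noise_std
    dark = dark + rng.normal(0.0, 1.0, dark.shape) * noise_std
    # 4. Re-apply gamma and clip back to the valid sRGB range.
    return np.clip(dark, 0.0, 1.0) ** (1.0 / gamma)

# Example: darken a synthetic "well-lit" image.
bright = np.random.default_rng(1).uniform(0.3, 1.0, (4, 4, 3))
dark = synthesize_dark_view(bright)
```

Because the dark views are rendered from the same underlying geometry as the bright ones, the resulting pairs are geometrically consistent by construction, which is what makes them usable as supervision for joint enhancement and reconstruction.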
Yue Wen
University of Central Florida
Prosthetics, Rehabilitation robotics, Machine learning, Adaptive control, Neural interface
Liang Song
China DXR Technology CO.,Ltd
Hesheng Wang
Shanghai Jiao Tong University