LLGS: Unsupervised Gaussian Splatting for Image Enhancement and Reconstruction in Pure Dark Environment

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address critical challenges in 3D Gaussian Splatting (3DGS) under pure darkness—including deficient color representation, multi-view inconsistency, and poor generalization—this paper proposes the first end-to-end, unsupervised 3DGS optimization framework for low-light modeling. Our method introduces: (1) M-Color decomposable Gaussian representation, enabling differentiable, disentangled modeling of geometry, illumination, and chromaticity; and (2) a zero-knowledge-guided enhancement mechanism that jointly optimizes geometry and illumination in an unsupervised manner, incorporating direction-aware enhancement without paired data or pretrained priors. Evaluated on real-world low-light datasets, our approach achieves state-of-the-art performance in both image enhancement (PSNR/SSIM) and 3D reconstruction (Chamfer Distance), marking the first solution to deliver multi-view consistent, high-fidelity, and fully unsupervised 3D modeling under extreme low-light conditions.

📝 Abstract
3D Gaussian Splatting has shown remarkable capabilities in novel view rendering tasks and exhibits significant potential for multi-view optimization. However, the original 3D Gaussian Splatting lacks an adequate color representation for inputs captured in low-light environments. Simply using enhanced images as inputs leads to multi-view inconsistency, and current single-view enhancement systems rely on pre-trained data and therefore generalize poorly across scenes. These problems limit the application of 3D Gaussian Splatting in low-light conditions in robotics, including high-fidelity modeling and feature matching. To address these challenges, we propose an unsupervised multi-view stereoscopic system based on Gaussian Splatting, called Low-Light Gaussian Splatting (LLGS), which enhances images in low-light environments while reconstructing the scene. Our method introduces a decomposable Gaussian representation called M-Color, which characterizes color information separately for targeted enhancement. Furthermore, we propose an unsupervised optimization method with zero-knowledge priors, using direction-based enhancement to ensure multi-view consistency. Experiments on real-world datasets demonstrate that our system outperforms state-of-the-art methods in both low-light enhancement and 3D Gaussian Splatting.
Problem

Research questions and friction points this paper is trying to address.

Enhancing images in pure dark environments without pre-trained data
Ensuring multi-view consistency in low-light 3D reconstruction
Overcoming color representation limitations in low-light Gaussian Splatting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised multi-view stereoscopic system
Decomposable Gaussian representation M-Color
Direction-based enhancement for consistency
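The core idea behind the M-Color bullet above, as far as the abstract reveals it, is that each Gaussian's color is decomposed so illumination can be enhanced without disturbing chromaticity (and thus hue consistency across views). The following toy sketch illustrates that idea only; the class name, the multiplicative decomposition `color = illumination * chromaticity`, and all parameters are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

# Toy sketch of a color-decomposed Gaussian. The paper's M-Color details are
# not given in this listing; the decomposition below (scalar illumination
# times an RGB chromaticity vector) is a hypothetical illustration.
class DecomposedGaussian:
    def __init__(self, mean, illumination, chromaticity):
        self.mean = np.asarray(mean, dtype=float)               # 3D center (geometry)
        self.illumination = float(illumination)                 # scalar brightness term
        self.chromaticity = np.asarray(chromaticity, dtype=float)  # RGB ratios (hue)

    def color(self, gain=1.0):
        # Enhancement scales only the illumination term; chromaticity is
        # untouched, so the rendered hue stays consistent across views.
        return np.clip(gain * self.illumination * self.chromaticity, 0.0, 1.0)

g = DecomposedGaussian(mean=[0, 0, 1], illumination=0.05,
                       chromaticity=[0.8, 0.6, 0.4])
dark = g.color()             # raw low-light color
bright = g.color(gain=10.0)  # enhanced: same hue ratios, higher brightness
```

Because the gain multiplies a shared scalar, the per-channel ratios of `bright` equal those of `dark`, which is the intuition behind enhancing illumination and chromaticity separately.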
Haoran Wang
School of Engineering and Informatics, University of Sussex

Jingwei Huang
Department of Automation Engineering, University of Electronic Science and Technology of China

Lu Yang
Department of Automation Engineering, University of Electronic Science and Technology of China

Tianchen Deng
Shanghai Jiao Tong University
Robotics, Computer Vision

Gaojing Zhang
M.S. student
SLAM, Environment Awareness

Mingrui Li
Dalian University of Technology
SLAM, 3D Vision, Robotics