DenoiseGS: Gaussian Reconstruction Model for Burst Denoising

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing burst denoising methods struggle to simultaneously ensure robustness to large motion and computational efficiency, often causing geometric degradation of 3D Gaussian point clouds and loss of high-frequency details. This paper proposes the first efficient 3D Gaussian Splatting denoising framework tailored for handheld burst imaging. It introduces a Gaussian self-consistency (GSC) loss and a log-weighted frequency (LWF) loss, enabling geometric regularization via high-quality point clouds generated by the model itself and enhancing high-frequency detail recovery while avoiding domain shift. The method combines 3D Gaussian reconstruction, a lightweight feed-forward network, and self-supervised learning. Experiments demonstrate that the approach significantly outperforms NeRF-based baselines on both burst denoising and novel-view synthesis under noise, achieving a 250× speedup in inference time and enabling high-fidelity, real-time-capable 3D reconstruction.

📝 Abstract
Burst denoising methods are crucial for enhancing images captured on handheld devices, but they often struggle with large motion or suffer from prohibitive computational costs. In this paper, we propose DenoiseGS, the first framework to leverage the efficiency of 3D Gaussian Splatting for burst denoising. Our approach addresses two key challenges when applying feed-forward Gaussian reconstruction models to noisy inputs: the degradation of Gaussian point clouds and the loss of fine details. To this end, we propose a Gaussian self-consistency (GSC) loss, which regularizes the geometry predicted from noisy inputs with high-quality Gaussian point clouds. These point clouds are generated from clean inputs by the same model that we are training, thereby alleviating potential bias or domain gaps. Additionally, we introduce a log-weighted frequency (LWF) loss to strengthen supervision within the spectral domain, effectively preserving fine-grained details. The LWF loss adaptively weights frequency discrepancies in a logarithmic manner, emphasizing challenging high-frequency details. Extensive experiments demonstrate that DenoiseGS significantly exceeds the state-of-the-art NeRF-based methods on both burst denoising and novel view synthesis under noisy conditions, while achieving 250× faster inference speed. Code and models are released at https://github.com/yscheng04/DenoiseGS.
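The LWF loss described in the abstract (logarithmic weighting of frequency discrepancies to emphasize high-frequency residuals) can be sketched roughly as follows. This is an illustrative NumPy approximation under stated assumptions: the function name, the comparison of raw FFT spectra, and the `log1p` weighting are guesses at the general idea, not the paper's exact formulation.

```python
import numpy as np

def lwf_loss(pred, target):
    """Illustrative log-weighted frequency (LWF) loss sketch.

    Compares the 2D Fourier spectra of a rendered image and its
    reference, weighting each per-frequency discrepancy
    logarithmically so that large (typically high-frequency)
    residuals contribute more to the loss. Hypothetical
    formulation; the paper's actual loss may differ.
    """
    f_pred = np.fft.fft2(pred)            # spectrum of prediction
    f_tgt = np.fft.fft2(target)           # spectrum of ground truth
    diff = np.abs(f_pred - f_tgt)         # per-frequency discrepancy
    weight = np.log1p(diff)               # logarithmic emphasis on hard residuals
    return float((weight * diff).mean())
```

With identical inputs the spectra match exactly and the loss is zero; any mismatch produces a positive, log-amplified penalty.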
Problem

Research questions and friction points this paper is trying to address.

Enhances burst image denoising for handheld devices under large motion
Addresses degradation of Gaussian point clouds from noisy input images
Preserves fine details in images through spectral domain supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Gaussian Splatting for burst denoising
Gaussian self-consistency loss for geometry regularization
Log-weighted frequency loss for detail preservation
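The Gaussian self-consistency idea above can be illustrated with a minimal sketch. All specifics here are assumptions for illustration only: the attribute names, the L1 penalty, and the function signature are hypothetical, since the paper's exact GSC formulation over Gaussian parameters is not given on this page.

```python
import numpy as np

def gsc_loss(noisy_gaussians, clean_gaussians,
             keys=("means", "scales", "rotations", "opacities")):
    """Illustrative Gaussian self-consistency (GSC) sketch.

    Penalizes the per-attribute L1 gap between Gaussians predicted
    from the noisy burst and those the same model predicts from
    clean inputs, which serve as a pseudo ground-truth point cloud.
    Attribute names and the L1 choice are assumptions.
    """
    return sum(
        float(np.abs(noisy_gaussians[k] - clean_gaussians[k]).mean())
        for k in keys
    )
```

In training, the clean-input predictions would be detached (treated as fixed targets) so the gradient only regularizes the noisy-branch geometry.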
Yongsen Cheng
Shanghai Jiao Tong University
Yuanhao Cai
Johns Hopkins University | Tsinghua University | Meta Superintelligence Labs
Generative AI · 3D Vision · Computational Photography
Yulun Zhang
Shanghai Jiao Tong University