Uncertainty-Aware Diffusion Guided Refinement of 3D Scenes

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Single-image 3D reconstruction suffers from severe viewpoint ambiguity, leading to blurry novel-view synthesis and geometric inconsistency—particularly in unobserved regions. To address this, we propose the first Gaussian scene optimization framework that jointly incorporates semantic uncertainty quantification and diffusion-based priors. Our method introduces a pixel-wise entropy-driven uncertainty map to guide iterative refinement of differentiable Gaussian parameters via a latent video diffusion model. Additionally, we integrate real-time Fourier-domain style transfer to explicitly align input image textures with generated views. Crucially, our approach requires neither multi-view supervision nor depth priors. Evaluated on RealEstate-10K and KITTI-v2, it achieves significant improvements in visual fidelity and geometric consistency of novel views, surpassing current state-of-the-art methods.

📝 Abstract
Reconstructing 3D scenes from a single image is a fundamentally ill-posed task due to the severely under-constrained nature of the problem. Consequently, when the scene is rendered from novel camera views, existing single-image-to-3D reconstruction methods render incoherent and blurry views. This problem is exacerbated when the unseen regions are far from the input camera. In this work, we address these inherent limitations of existing single-image-to-3D scene feedforward networks. To alleviate the poor performance caused by insufficient information beyond the input image's view, we leverage a strong generative prior in the form of a pre-trained latent video diffusion model for iterative refinement of a coarse scene represented by optimizable Gaussian parameters. To ensure that the style and texture of the generated images align with those of the input image, we incorporate on-the-fly Fourier-style transfer between the generated images and the input image. Additionally, we design a semantic uncertainty quantification module that calculates per-pixel entropy and yields uncertainty maps used to guide the refinement process from the most confident pixels while discarding the remaining highly uncertain ones. We conduct extensive experiments on real-world scene datasets, including in-domain RealEstate-10K and out-of-domain KITTI-v2, showing that our approach provides more realistic and higher-fidelity novel view synthesis results than existing state-of-the-art methods.
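The abstract's uncertainty module computes per-pixel entropy over semantic class probabilities and keeps only the most confident pixels to guide refinement. The paper does not give the exact formulation; a minimal sketch of one plausible version, using the standard Shannon entropy and a hypothetical `keep_fraction` threshold, might look like:

```python
import numpy as np

def entropy_uncertainty_map(probs, keep_fraction=0.5):
    """Per-pixel Shannon entropy over semantic class probabilities.

    probs: (H, W, C) array of per-pixel class probabilities (sum to 1
    along the last axis). Returns an (H, W) entropy map and a boolean
    mask selecting the lowest-entropy (most confident) pixels.
    `keep_fraction` is an illustrative parameter, not from the paper.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=-1)  # (H, W)
    # Keep the most confident fraction of pixels; discard the rest.
    threshold = np.quantile(entropy, keep_fraction)
    confident_mask = entropy <= threshold
    return entropy, confident_mask
```

A one-hot pixel yields entropy near zero (maximally confident), while a uniform distribution over C classes yields the maximum log C, so thresholding the map separates reliable pixels from ambiguous ones.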
Problem

Research questions and friction points this paper is trying to address.

Reconstructing 3D scenes from single images is ill-posed.
Existing methods produce blurry views from novel camera angles.
Unseen regions far from the input camera worsen reconstruction quality.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages a pre-trained latent video diffusion model as a generative prior for iterative refinement.
Incorporates on-the-fly Fourier-style transfer to align generated views with the input image's style and texture.
Employs a semantic uncertainty quantification module based on per-pixel entropy to guide refinement.
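The Fourier-style transfer in the abstract aligns the texture of generated views with the input image. The paper does not specify its exact form; a common formulation (as in Fourier Domain Adaptation) swaps the low-frequency amplitude spectrum of the generated image with that of the reference while keeping the generated phase. A sketch under that assumption, with an illustrative `beta` band-size parameter:

```python
import numpy as np

def fourier_style_transfer(generated, reference, beta=0.05):
    """Transfer low-frequency amplitude (style) from reference to generated.

    Both images: (H, W) float arrays (apply per channel for RGB).
    beta (assumed parameter) controls the swapped low-frequency band size.
    """
    fft_gen = np.fft.fft2(generated)
    fft_ref = np.fft.fft2(reference)
    amp_gen, phase_gen = np.abs(fft_gen), np.angle(fft_gen)
    amp_ref = np.abs(fft_ref)

    # Center the spectra so the low frequencies sit in the middle.
    amp_gen = np.fft.fftshift(amp_gen)
    amp_ref = np.fft.fftshift(amp_ref)
    h, w = generated.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    # Swap the centered low-frequency amplitude band (style/texture).
    amp_gen[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_ref[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_gen = np.fft.ifftshift(amp_gen)

    # Recombine reference-styled amplitude with the generated phase
    # (phase carries the scene content/structure).
    stylized = np.fft.ifft2(amp_gen * np.exp(1j * phase_gen))
    return np.real(stylized)
```

Because phase is preserved, scene structure in the generated view is untouched; only the low-frequency amplitude, which governs global color and texture statistics, is pulled toward the input image.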