S2D: Sparse to Dense Lifting for 3D Reconstruction with Minimal Inputs

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D representations, such as point clouds and 3D Gaussian Splatting, often suffer from rendering artifacts or significant quality degradation under sparse input conditions. To address this limitation, this work proposes S2D, a two-stage framework that lifts sparse point clouds into dense, high-quality, view-consistent 3D Gaussian Splatting representations. The key innovation is the first integration of a one-step diffusion model for both point cloud densification and image artifact inpainting, complemented by a stochastic sample dropout mechanism and weighted gradient optimization that improve reconstruction robustness and consistency. Experiments show that S2D achieves high-fidelity novel view synthesis even under extremely sparse viewing conditions, attaining state-of-the-art reconstruction quality across sparsity levels and substantially reducing the dependence of 3D Gaussian Splatting on the number of input views.

📝 Abstract
Explicit 3D representations have become an essential medium for 3D simulation and understanding. However, the two most commonly used representations, point clouds and 3D Gaussian Splatting (3DGS), suffer from non-photorealistic rendering and from significant degradation under sparse inputs, respectively. In this paper, we introduce Sparse to Dense lifting (S2D), a novel pipeline that bridges the two representations and achieves high-quality 3DGS reconstruction with minimal inputs. The S2D lifting is two-fold. First, we present an efficient one-step diffusion model that lifts sparse point clouds and fixes image artifacts with high fidelity. Second, to reconstruct 3D-consistent scenes, we design a matching reconstruction strategy with random sample dropping and weighted gradients for robust model fitting from sparse input views to dense novel views. Extensive experiments show that S2D achieves the best consistency in generating novel-view guidance and first-tier sparse-view reconstruction quality across different input sparsity levels. By reconstructing stable scenes from the fewest captures among existing methods, S2D minimizes the input requirements of 3DGS applications.
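The two robustness tricks in the reconstruction strategy (random dropout of generated views and down-weighting their gradients) can be illustrated with a toy loss-aggregation sketch. Everything here is hypothetical: the function name, the loss lists, and the constants `drop_prob`/`gen_weight` are illustrative choices, not the paper's actual implementation, which operates on rendered images inside a 3DGS training loop.

```python
import random

def combine_view_losses(real_losses, generated_losses,
                        drop_prob=0.3, gen_weight=0.5, seed=0):
    """Toy sketch (not the authors' code): aggregate per-view losses with
    stochastic dropout of diffusion-generated guidance views and a reduced
    weight on the surviving ones, so real captures dominate the gradient."""
    rng = random.Random(seed)  # fixed seed keeps the sketch deterministic
    # Real sparse captures always contribute at full weight.
    total = sum(real_losses)
    for loss in generated_losses:
        if rng.random() < drop_prob:
            continue  # random sample dropout: skip this synthesized view
        total += gen_weight * loss  # weighted gradient via loss scaling
    return total

# The kept generated view contributes at half weight: 1.0 + 2.0 + 0.5 * 4.0
print(combine_view_losses([1.0, 2.0], [4.0]))  # → 5.0
# drop_prob=1.0 discards every generated view, leaving only real captures.
print(combine_view_losses([1.0, 2.0], [4.0], drop_prob=1.0))  # → 3.0
```

Scaling the loss rather than the gradient directly is a common shortcut: in autodiff frameworks, multiplying a view's loss by a weight scales all gradients flowing from that view by the same factor.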

Problem

Research questions and friction points this paper is trying to address.

3D reconstruction
sparse inputs
3D Gaussian Splatting
point cloud
novel view synthesis

Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse to Dense Lifting
3D Gaussian Splatting
diffusion model
sparse input reconstruction
novel view synthesis