SparseRecon: Neural Implicit Surface Reconstruction from Sparse Views with Feature and Depth Consistencies

πŸ“… 2025-08-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Sparse multi-view inputs provide insufficient geometric cues, leading to incomplete and detail-deficient 3D surface reconstruction. To address this, we propose a neural implicit field-based reconstruction method. Our key contributions are: (1) a volume-rendering-driven cross-view feature consistency loss that mitigates geometric ambiguity in occluded and textureless regions; and (2) an uncertainty-guided depth constraint that strengthens geometric regularization under weak supervision. By jointly optimizing these cross-view consistency constraints, our method achieves significant improvements in reconstruction completeness, surface smoothness, and geometric fidelity, even from extremely sparse inputs (e.g., only 3–5 images) with low inter-frame overlap. Experiments on the ScanNet and DTU benchmarks demonstrate superior performance over both generalizable and overfitting-based state-of-the-art methods, particularly in challenging sparse-view scenarios, yielding accurate and robust 3D surface reconstructions.

πŸ“ Abstract
Surface reconstruction from sparse views aims to recover a 3D shape or scene from only a few RGB images. The latest methods are either generalization-based or overfitting-based. However, generalization-based methods do not generalize well to views unseen during training, while the reconstruction quality of overfitting-based methods remains limited by sparse geometric cues. To address this issue, we propose SparseRecon, a novel neural implicit reconstruction method for sparse views with volume-rendering-based feature consistency and an uncertainty-guided depth constraint. First, we introduce a cross-view feature consistency loss to constrain the neural implicit field. This design alleviates the ambiguity caused by insufficient cross-view consistency information and ensures completeness and smoothness in the reconstruction results. Second, we employ an uncertainty-guided depth constraint that complements the feature consistency loss in areas with occlusion and indistinct features, recovering geometric detail for better reconstruction quality. Experimental results demonstrate that our method outperforms state-of-the-art methods, producing high-quality geometry from sparse-view input, especially in scenarios with small view overlap. Project page: https://hanl2010.github.io/SparseRecon/.
Problem

Research questions and friction points this paper is trying to address.

Reconstruct 3D surfaces from few RGB images
Improve generalization on unseen sparse views
Enhance geometry details with limited input
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature consistency loss for neural implicit field
Uncertainty-guided depth constraint for occlusion
Volume rendering-based feature and depth consistencies
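The two constraints listed above can be sketched, very loosely, as follows. This is a minimal illustration, not the paper's implementation: the helper names are hypothetical, the feature-consistency term is shown as a simple cosine distance between a reference-view feature and the feature sampled at its reprojection in a source view, and the depth term uses a generic aleatoric-style weighting (|residual|/σ + log σ) as a stand-in for the paper's uncertainty-guided constraint.

```python
import math

def cosine_feature_consistency(feat_ref, feat_src):
    """Hypothetical sketch: 1 - cosine similarity between a reference-view
    feature vector and the feature sampled at the reprojected location in a
    source view. The paper aggregates such terms over rays and views via
    volume rendering; here we show only the per-point comparison."""
    dot = sum(a * b for a, b in zip(feat_ref, feat_src))
    norm = (math.sqrt(sum(a * a for a in feat_ref))
            * math.sqrt(sum(b * b for b in feat_src)))
    return 1.0 - dot / norm

def uncertainty_weighted_depth_loss(d_pred, d_prior, sigma):
    """Hypothetical sketch: penalize the gap between rendered depth and a
    depth prior, down-weighted where the prior's uncertainty sigma is high.
    The |r|/sigma + log(sigma) form is a standard aleatoric-uncertainty
    weighting, used here only to illustrate the idea."""
    return abs(d_pred - d_prior) / sigma + math.log(sigma)
```

With this weighting, a large depth residual is penalized less when the prior is marked uncertain (large σ), which is the intuition behind letting the depth constraint "back up" the feature consistency loss only where the prior is trustworthy.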
Liang Han
School of Software, Tsinghua University, Beijing, China
Xu Zhang
China Telecom, Beijing, China
Haichuan Song
Computer Science and Technology, East China Normal University, Shanghai, China
Kanle Shi
Kuaishou Technology, Beijing, China
Yu-Shen Liu
School of Software, Tsinghua University, Beijing, China
Zhizhong Han
Assistant Professor of Computer Science at Wayne State University
3D Computer Vision · Digital Geometry Processing · Artificial Intelligence · AR/VR