AI Summary
Sparse multi-view inputs suffer from insufficient geometric cues, leading to incomplete and detail-deficient 3D surface reconstruction. To address this, we propose a neural implicit field-based reconstruction method. Our key contributions are: (1) a volume-rendering-driven cross-view feature consistency loss that mitigates geometric ambiguity in occluded and textureless regions; and (2) an uncertainty-guided depth constraint that strengthens geometric regularization under weak supervision. By jointly optimizing multi-view stereo consistency, our method achieves significant improvements in reconstruction completeness, surface smoothness, and geometric fidelity, even from extremely sparse inputs (e.g., only 3–5 images) with low inter-frame overlap. Experiments on ScanNet and DTU benchmarks demonstrate superior performance over both generalizable and overfitting-based state-of-the-art methods, particularly in challenging sparse-view scenarios, yielding high-accuracy and robust 3D surface reconstructions.
Abstract
Surface reconstruction from sparse views aims to recover a 3D shape or scene from a few RGB images. The latest methods are either generalization-based or overfitting-based. However, generalization-based methods transfer poorly to views unseen during training, while the reconstruction quality of overfitting-based methods is constrained by the scarce geometry cues available in sparse inputs. To address this issue, we propose SparseRecon, a novel neural implicit reconstruction method for sparse views with volume rendering-based feature consistency and an uncertainty-guided depth constraint. First, we introduce a cross-view feature consistency loss to constrain the neural implicit field. This design alleviates the ambiguity caused by insufficient cross-view consistency information and ensures completeness and smoothness in the reconstruction results. Second, we employ an uncertainty-guided depth constraint to complement the feature consistency loss in areas with occlusion or insignificant features, which recovers geometric details for better reconstruction quality. Experimental results demonstrate that our method outperforms state-of-the-art methods, producing high-quality geometry from sparse-view input, especially in scenarios with little view overlap. Project page: https://hanl2010.github.io/SparseRecon/.
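The two constraints described above can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: it assumes 4×4 world-to-camera matrices, nearest-neighbor feature sampling (a real pipeline would use bilinear sampling on learned feature maps), and a standard heteroscedastic weighting for the uncertainty-guided depth term; all function names and shapes are hypothetical.

```python
import numpy as np

def project(points, K, w2c):
    """Project Nx3 world points into a view; returns Nx2 pixel coords.
    K: 3x3 intrinsics, w2c: 4x4 world-to-camera extrinsics (assumed)."""
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = (w2c @ pts_h.T).T[:, :3]   # world -> camera coordinates
    uv = (K @ cam.T).T               # camera -> homogeneous pixel coords
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

def sample_features(feat, uv):
    """Nearest-neighbor lookup of an HxWxC feature map at Nx2 pixel coords."""
    h, w, _ = feat.shape
    x = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    y = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return feat[y, x]

def feature_consistency_loss(surf_pts, feats, Ks, w2cs):
    """Mean L1 distance between per-view features of the same surface points.
    surf_pts: Nx3 points on the surface, e.g. estimated via volume rendering."""
    sampled = [sample_features(f, project(surf_pts, K, E))
               for f, K, E in zip(feats, Ks, w2cs)]
    ref = sampled[0]  # first view as reference
    return np.mean([np.abs(s - ref).mean() for s in sampled[1:]])

def uncertainty_depth_loss(d_pred, d_prior, log_var):
    """Uncertainty-weighted depth term: the depth prior is down-weighted
    where the predicted uncertainty (log variance) is high, with log_var
    as a regularizer to keep uncertainty from growing unboundedly."""
    return np.mean(np.exp(-log_var) * np.abs(d_pred - d_prior) + log_var)
```

The intuition matches the abstract: where cross-view features disagree (occlusion, weak texture), the consistency loss is unreliable, so a depth prior steps in, but only to the extent that its predicted uncertainty is low.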