NeRFPrior: Learning Neural Radiance Field as a Prior for Indoor Scene Reconstruction

📅 2025-03-24
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address poor geometric and textural consistency and heavy reliance on large-scale pretraining in multi-view RGB reconstruction for indoor scenes, this paper proposes a NeRF-guided signed distance function (SDF) reconstruction framework. Our method leverages NeRF as a lightweight, scene-adaptive prior that jointly provides geometric constraints—via implicit SDF surfaces—and photometric constraints—via volumetric rendering. We introduce a ray-intersection-based multi-view geometric consistency loss and a confidence-weighted depth consistency loss, significantly improving robustness in texture-deprived regions. Crucially, the approach requires no additional training data or large-scale pretraining. Evaluated on standard benchmarks including ScanNet and Matterport3D, it achieves more complete and higher-fidelity geometric and textural reconstructions, consistently outperforming state-of-the-art methods.

📝 Abstract
Recently, it has been shown that priors are vital for neural implicit functions to reconstruct high-quality surfaces from multi-view RGB images. However, current priors require large-scale pre-training, and merely provide geometric clues without considering the importance of color. In this paper, we present NeRFPrior, which adopts a neural radiance field as a prior to learn signed distance fields using volume rendering for surface reconstruction. Our NeRF prior can provide both geometric and color clues, and can also be trained quickly on the same scene without additional data. Based on the NeRF prior, we are able to learn a signed distance function (SDF) by explicitly imposing a multi-view consistency constraint on each ray intersection for surface inference. Specifically, at each ray intersection, we use the density in the prior as a coarse geometry estimate, while using the color near the surface as a clue to check its visibility from another view angle. For textureless areas where the multi-view consistency constraint does not work well, we further introduce a depth consistency loss with confidence weights to infer the SDF. Our experimental results show that our method outperforms state-of-the-art methods on widely used benchmarks.
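The abstract's confidence-weighted depth consistency loss for textureless areas can be pictured as a weighted residual between depths rendered from the SDF being learned and depths from the NeRF prior. The following is a minimal sketch under that assumption; all names, shapes, and the epsilon term are illustrative, not the paper's actual formulation.

```python
import numpy as np

def depth_consistency_loss(rendered_depth, prior_depth, confidence):
    """Hypothetical confidence-weighted depth consistency loss.

    rendered_depth: per-ray depths rendered from the learned SDF
    prior_depth:    per-ray depth estimates from the NeRF prior
    confidence:     per-ray weights in [0, 1]; low values downweight
                    rays where the prior's depth is unreliable
    """
    residual = np.abs(rendered_depth - prior_depth)
    # Normalize by total confidence so reliable rays dominate the loss
    return np.sum(confidence * residual) / (np.sum(confidence) + 1e-8)

loss = depth_consistency_loss(
    np.array([1.0, 2.0, 3.0]),   # rendered depths
    np.array([1.1, 2.0, 2.5]),   # prior depths
    np.array([1.0, 1.0, 0.0]),   # zero confidence masks out the bad last ray
)
```

With the last ray masked out, only the first two residuals contribute, giving a small averaged loss; the unreliable 0.5 depth error is ignored entirely.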
Problem

Research questions and friction points this paper is trying to address.

Improving surface reconstruction from RGB images
Combining geometric and color clues in priors
Enhancing textureless area depth consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

NeRF prior for fast geometric and color learning
Multi-view consistency constraint for surface inference
Depth consistency loss for textureless areas
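The first innovation bullet, using the NeRF prior's density as a coarse geometry estimate at each ray intersection, can be sketched as a threshold crossing along sampled densities. This is an illustrative sketch only; the sample spacing, threshold value, and function names are assumptions, not the paper's method.

```python
import numpy as np

def coarse_surface_depth(densities, t_vals, threshold=10.0):
    """Sketch: treat the first sample where NeRF density exceeds a
    threshold as a coarse surface location along the ray.

    densities: NeRF volume densities at sample points along one ray
    t_vals:    corresponding ray depths for those samples
    """
    hits = np.nonzero(densities > threshold)[0]
    if hits.size == 0:
        return None  # ray never hits dense geometry
    return t_vals[hits[0]]  # depth of the first dense sample
```

Such a coarse estimate could then anchor where the multi-view color consistency check is evaluated, before the SDF refines the surface.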
👥 Authors
Wenyuan Zhang
School of Software, Tsinghua University, Beijing, China
Emily Yue-ting Jia
School of Software, Tsinghua University, Beijing, China
Junsheng Zhou
Tsinghua University
Baorui Ma
Tsinghua University
Kanle Shi
Kuaishou Technology, Beijing, China
Yu-Shen Liu
School of Software, Tsinghua University, Beijing, China