Multi-view Surface Reconstruction Using Normal and Reflectance Cues

📅 2025-06-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Sparse multi-view reconstruction of complex materials—such as specular and subsurface-scattering surfaces—suffers from low geometric fidelity and detail loss. To address this, we propose a radiance-based pixel-wise joint reparameterization framework. Our core innovation is the first unified radiance vector representation jointly encoding surface normals and albedo, which tightly integrates multi-view geometric constraints with photometric stereo priors and seamlessly embeds into neural implicit pipelines (e.g., NeuS2). The method significantly improves fine-grained geometry recovery, maintains robustness under occlusion and in weakly-textured regions, and ensures computational efficiency and optimization stability. We achieve state-of-the-art performance on major multi-view photometric stereo benchmarks—including DiLiGenT-MV, LUCES-MV, and Skoltech3D—with substantial gains in both geometric detail fidelity and visibility robustness.

📝 Abstract
Achieving high-fidelity 3D surface reconstruction while preserving fine details remains challenging, especially in the presence of materials with complex reflectance properties and without a dense-view setup. In this paper, we introduce a versatile framework that incorporates multi-view normal and optionally reflectance maps into radiance-based surface reconstruction. Our approach employs a pixel-wise joint re-parametrization of reflectance and surface normals, representing them as a vector of radiances under simulated, varying illumination. This formulation enables seamless incorporation into standard surface reconstruction pipelines, such as traditional multi-view stereo (MVS) frameworks or modern neural volume rendering (NVR) ones. Combined with the latter, our approach achieves state-of-the-art performance on multi-view photometric stereo (MVPS) benchmark datasets, including DiLiGenT-MV, LUCES-MV and Skoltech3D. In particular, our method excels in reconstructing fine-grained details and handling challenging visibility conditions. The present paper is an extended version of the earlier conference paper by Brument et al. (in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024), featuring an accelerated and more robust algorithm as well as a broader empirical evaluation. The code and data related to this article are available at https://github.com/RobinBruneau/RNb-NeuS2.
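To make the re-parametrization idea concrete, here is a minimal sketch (not the authors' implementation) assuming a Lambertian pixel: a per-pixel normal and albedo are encoded as a vector of radiances simulated under K known directional lights. The function name and light configuration are illustrative assumptions.

```python
import numpy as np

def radiance_vector(normal, albedo, light_dirs):
    """Re-parametrize a per-pixel (normal, albedo) pair as a vector of
    radiances under K simulated directional lights (Lambertian sketch)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)                     # unit surface normal
    shading = np.clip(light_dirs @ n, 0.0, None)  # clamp back-facing lights
    return albedo * shading                       # (K,) radiance vector

# Example: a pixel facing the camera, lit from three axis-aligned directions.
lights = np.array([[0.0, 0.0, 1.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
r = radiance_vector([0.0, 0.0, 1.0], 0.8, lights)  # -> [0.8, 0.0, 0.0]
```

Because each entry of the vector is an ordinary radiance, such a representation can be supervised by the same photometric loss a radiance-based pipeline (e.g. NeuS2) already uses.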
Problem

Research questions and friction points this paper is trying to address.

Reconstructing high-fidelity 3D surfaces with fine details
Handling materials with complex reflectance properties
Achieving accurate reconstruction without dense-view setups
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view normal and reflectance integration
Pixel-wise joint re-parametrization technique
Neural volume rendering enhanced reconstruction
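In the Lambertian case, the radiance-vector encoding is invertible by classical photometric stereo. The following is a standard least-squares recovery of normal and albedo from such a vector (a textbook sketch, not this paper's exact algorithm):

```python
import numpy as np

def recover_normal_albedo(radiances, light_dirs):
    """Classical Lambertian photometric-stereo inversion: given K >= 3
    radiances under known lights, solve m = albedo * (L @ n) in the
    least-squares sense; then albedo = ||g|| and n = g / ||g||."""
    g, *_ = np.linalg.lstsq(light_dirs, radiances, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / albedo
    return normal, albedo

# Synthetic pixel: normal [0,0,1], albedo 0.8, three axis-aligned lights.
lights = np.array([[0.0, 0.0, 1.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
m = 0.8 * np.clip(lights @ np.array([0.0, 0.0, 1.0]), 0.0, None)
n_hat, rho_hat = recover_normal_albedo(m, lights)
```

This round trip only holds for un-shadowed Lambertian pixels; handling complex reflectance and visibility is precisely where the paper's joint formulation goes beyond this baseline.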
👥 Authors
Robin Bruneau
DQBM, University of Zurich, Switzerland.
Baptiste Brument
IRIT, UMR CNRS 5505, Université de Toulouse, France.
Yvain Quéau
GREYC, CNRS, UNICAEN, ENSICAEN, Normandie Université, France.
Jean Mélou
IRIT, UMR CNRS 5505, Université de Toulouse, France; FittingBox, Toulouse, France.
François Lauze
DIKU, University of Copenhagen, Denmark.
Jean-Denis Durou
IRIT, UMR CNRS 5505, Université de Toulouse, France.
Lilian Calvet
Postdoc in Computer Vision
computer vision, machine learning, augmented reality, medical imaging, computer-assisted interventions