AI Summary
Sparse multi-view reconstruction of complex materials, such as specular and subsurface-scattering surfaces, suffers from low geometric fidelity and loss of fine detail. To address this, we propose a radiance-based, pixel-wise joint reparameterization framework. Our core innovation is the first unified radiance-vector representation that jointly encodes surface normals and albedo, tightly integrating multi-view geometric constraints with photometric stereo priors and embedding seamlessly into neural implicit pipelines (e.g., NeuS2). The method significantly improves fine-grained geometry recovery, remains robust under occlusion and in weakly textured regions, and preserves computational efficiency and optimization stability. We achieve state-of-the-art performance on major multi-view photometric stereo benchmarks, including DiLiGenT-MV, LUCES-MV, and Skoltech3D, with substantial gains in both geometric detail fidelity and visibility robustness.
Abstract
Achieving high-fidelity 3D surface reconstruction while preserving fine details remains challenging, especially in the presence of materials with complex reflectance properties and without a dense-view setup. In this paper, we introduce a versatile framework that incorporates multi-view normal maps, and optionally reflectance maps, into radiance-based surface reconstruction. Our approach employs a pixel-wise joint re-parametrization of reflectance and surface normals, representing them as a vector of radiances under simulated, varying illumination. This formulation enables seamless incorporation into standard surface reconstruction pipelines, such as traditional multi-view stereo (MVS) frameworks or modern neural volume rendering (NVR) ones. Combined with the latter, our approach achieves state-of-the-art performance on multi-view photometric stereo (MVPS) benchmark datasets, including DiLiGenT-MV, LUCES-MV and Skoltech3D. In particular, our method excels at reconstructing fine-grained details and handling challenging visibility conditions. The present paper is an extended version of the earlier conference paper by Brument et al. (in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024), featuring an accelerated and more robust algorithm as well as a broader empirical evaluation. The code and data related to this article are available at https://github.com/RobinBruneau/RNb-NeuS2.
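To make the re-parametrization concrete, the sketch below illustrates one way a per-pixel (normal, reflectance) pair can be turned into a vector of radiances under simulated directional lights. It assumes a simple Lambertian shading model for clarity; the function name `radiance_vector` and the specific light directions are illustrative choices, not the paper's actual implementation, which handles richer reflectance.

```python
import numpy as np

def radiance_vector(normal, albedo, light_dirs):
    """Re-parametrize a pixel's (normal, albedo) as K simulated radiances.

    normal:     (3,) unit surface normal
    albedo:     scalar (or (3,) RGB) reflectance
    light_dirs: (K, 3) unit directions of the simulated illuminations
    Returns a length-K vector of Lambertian radiances (a simplifying
    assumption made for this sketch).
    """
    shading = np.clip(light_dirs @ normal, 0.0, None)  # K clamped cosines
    return albedo * shading                            # K radiance values

# Example: one pixel, three hypothetical simulated lights
n = np.array([0.0, 0.0, 1.0])        # normal facing the camera
rho = 0.8                            # illustrative albedo value
L = np.array([[0.0, 0.0, 1.0],       # frontal light
              [1.0, 0.0, 0.0],       # grazing light (zero contribution)
              [0.0, 0.70710678, 0.70710678]])  # 45-degree light
r = radiance_vector(n, rho, L)
```

Each entry of `r` is an ordinary radiance, which is what lets the representation slot into standard radiance-based pipelines (MVS or NVR) without modifying their rendering losses.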