🤖 AI Summary
This work tackles the ill-posed problem of high-fidelity 3D surface reconstruction from a single color image under near-point lighting and non-Lambertian reflectance, conditions under which existing photometric stereo methods struggle. The proposed approach is the first to leverage neural implicit representations to jointly model surface geometry and non-Lambertian BRDF, and it incorporates a mono-chromaticity prior (uniform chromaticity and homogeneous material) that improves solution uniqueness without requiring multi-view or multi-light observations. The resulting framework enables robust reconstruction from a single RGB image and is validated in real-world scenarios using a custom-designed compact optical tactile sensor. Extensive experiments on synthetic and real datasets show that the method significantly outperforms state-of-the-art techniques in accuracy and practical applicability.
📝 Abstract
Color photometric stereo enables single-shot surface reconstruction, extending to dynamic scenes the conventional photometric stereo setup, which requires multiple images of a static scene under varying illumination. However, most existing approaches assume ideal distant lighting and Lambertian reflectance, leaving the more practical near-light, non-Lambertian setting underexplored. To overcome this limitation, we propose a framework that leverages neural implicit representations for depth and BRDF modeling under a mono-chromaticity assumption (uniform chromaticity and homogeneous material), which alleviates the inherent ill-posedness of color photometric stereo and allows detailed surface recovery from a single image. Furthermore, we design a compact optical tactile sensor to validate our approach. Experiments on both synthetic and real-world datasets demonstrate that our method achieves accurate and robust surface reconstruction.
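To make the single-shot setting concrete, the sketch below simulates the standard color photometric stereo image-formation model under near-point lighting, simplified to a Lambertian surface (the paper itself handles non-Lambertian BRDFs). Each RGB channel is lit by one colored near-point light, so a single RGB pixel yields three shading constraints. All positions, intensities, and the albedo here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed near-light Lambertian model, per channel c in {R, G, B}:
#   I_c = e_c * rho * max(0, n . l_c) / ||p_c - x||^2,
# where p_c is the light position, e_c its intensity, rho the albedo,
# n the unit surface normal, and l_c = (p_c - x) / ||p_c - x||.

def render_pixel(x, n, rho, light_positions, light_intensities):
    """Render one RGB pixel lit by three colored near-point lights."""
    rgb = np.zeros(3)
    for c in range(3):
        d = light_positions[c] - x          # surface point -> light vector
        r2 = d @ d                          # squared distance (inverse-square falloff)
        l = d / np.sqrt(r2)                 # unit light direction
        rgb[c] = light_intensities[c] * rho * max(0.0, n @ l) / r2
    return rgb

# Illustrative example: surface point at the origin, normal facing +z,
# three lights arranged around the optical axis (hypothetical geometry).
x = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
lights = np.array([[0.5, 0.0, 1.0],
                   [-0.25, 0.4, 1.0],
                   [-0.25, -0.4, 1.0]])
I = render_pixel(x, n, rho=0.8,
                 light_positions=lights,
                 light_intensities=np.ones(3))
```

Inverting this map, i.e. recovering `n` (and ultimately depth) from one observed `I`, is what the proposed neural implicit framework does, with the mono-chromaticity prior constraining the otherwise underdetermined per-pixel problem.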