🤖 AI Summary
Existing hair modeling approaches struggle to simultaneously achieve physical plausibility and illumination generalization: analytical BSDF models lack sufficient detail to capture complex scattering phenomena, while neural rendering methods exhibit limited relighting capability. To address this, GroomLight introduces a hybrid inverse rendering framework that jointly optimizes an extended hair BSDF, which explicitly models anisotropic scattering and multi-layer microstructure, with an illumination-aware neural residual network. This coupling lets physics-based priors and data-driven residuals be learned together within a unified optimization pipeline. The method supports high-fidelity relighting, novel-view synthesis, and editable material manipulation. Evaluated on real human hair data, GroomLight achieves state-of-the-art performance, improving illumination transfer fidelity and recovering fine-scale structural detail.
📝 Abstract
We present GroomLight, a novel method for relightable hair appearance modeling from multi-view images. Existing hair capture methods struggle to balance photorealistic rendering with relighting capabilities. Analytical material models, while physically grounded, often fail to fully capture appearance details. Conversely, neural rendering approaches excel at view synthesis but generalize poorly to novel lighting conditions. GroomLight addresses this challenge by combining the strengths of both paradigms. It employs an extended hair BSDF model to capture primary light transport and a light-aware residual model to reconstruct the remaining details. We further propose a hybrid inverse rendering pipeline to optimize both components, enabling high-fidelity relighting, view synthesis, and material editing. Extensive evaluations on real-world hair data demonstrate state-of-the-art performance of our method.
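The core idea of the hybrid decomposition can be sketched in a few lines: an analytical hair shading term captures the primary light transport, and a light-aware residual accounts for what the analytical model misses. The sketch below is illustrative only; it uses the classical Kajiya-Kay hair model as a stand-in for the paper's extended BSDF, and a trivial linear-plus-tanh function (`residual`) as a stand-in for the light-aware neural residual network. None of these simplifications reflect GroomLight's actual model.

```python
import numpy as np

def kajiya_kay(tangent, light, view, kd=0.6, ks=0.3, shininess=40.0):
    """Classical Kajiya-Kay hair shading (illustrative stand-in for the
    paper's extended BSDF): diffuse and specular terms depend on angles
    between the strand tangent and the light/half vectors."""
    t = tangent / np.linalg.norm(tangent)
    l = light / np.linalg.norm(light)
    v = view / np.linalg.norm(view)
    h = l + v
    h = h / np.linalg.norm(h)
    t_dot_l = np.clip(t @ l, -1.0, 1.0)
    t_dot_h = np.clip(t @ h, -1.0, 1.0)
    diffuse = kd * np.sqrt(max(0.0, 1.0 - t_dot_l ** 2))
    specular = ks * np.sqrt(max(0.0, 1.0 - t_dot_h ** 2)) ** shininess
    return diffuse + specular

def residual(light, view, w):
    """Hypothetical stand-in for the light-aware residual model: a tiny
    learned function of the light and view directions (linear + tanh)."""
    feat = np.concatenate([light, view])
    return float(np.tanh(w @ feat))

def hybrid_shade(tangent, light, view, w):
    """Hybrid appearance: physically based term plus data-driven residual.
    In an inverse rendering setup both parts would be optimized jointly
    against multi-view captures."""
    return kajiya_kay(tangent, light, view) + residual(light, view, w)
```

With the residual weights at zero, the hybrid output reduces exactly to the analytical term; the residual only adds what the BSDF cannot explain, which is the property that keeps the decomposition relightable.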