🤖 AI Summary
This work addresses the challenge of reconstructing strand-level 3D hair models from a single portrait image, where maintaining realistic detail in occluded regions remains difficult. The authors reformulate the task as a calibrated multi-view reconstruction problem and, for the first time, introduce 3D priors derived from video generation models. They propose a two-stage strand-growing algorithm driven by a hybrid implicit field, together with a neural orientation extractor trained on sparse real-world annotations. This approach significantly outperforms existing methods in both visible and occluded regions, producing high-fidelity, geometrically consistent reconstructions of diverse hairstyles at the individual strand level.
📝 Abstract
Reconstructing strand-level 3D hair from a single-view image is highly challenging, especially when it comes to preserving consistent and realistic attributes in unseen regions. Existing methods rely on limited frontal-view cues and small-scale, style-restricted synthetic data, and often fail to produce satisfactory results in invisible regions. In this work, we propose a novel framework that leverages the strong 3D priors of video generation models to transform single-view hair reconstruction into a calibrated multi-view reconstruction task. To balance reconstruction quality and efficiency for the reformulated multi-view task, we further introduce a neural orientation extractor trained on sparse real-image annotations for improved full-view orientation estimation. In addition, we design a two-stage strand-growing algorithm based on a hybrid implicit field that synthesizes 3D strand curves with fine-grained detail at a relatively fast speed. Extensive experiments demonstrate that our method achieves state-of-the-art single-view 3D hair strand reconstruction across a diverse range of hair portraits, in both visible and invisible regions.
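For intuition, here is a minimal sketch of how growing strands through an implicit field might look. The abstract does not spell out the algorithm, so everything below is an assumption: the field interfaces (`orientation_field`, `occupancy_field`), the Euler-style stepping, and the coarse-then-fine split are illustrative stand-ins for the paper's hybrid implicit field and two-stage procedure, not its actual implementation.

```python
import numpy as np

def grow_strand(root, orientation_field, occupancy_field,
                step_size=0.002, max_steps=200, occ_thresh=0.5):
    """Trace one strand by Euler integration through an orientation field.

    `orientation_field(p)` is assumed to return a 3D growth direction at
    point `p`, and `occupancy_field(p)` a scalar in [0, 1] indicating
    whether `p` lies inside the hair volume. Both are hypothetical
    stand-ins for the paper's hybrid implicit field.
    """
    points = [np.asarray(root, dtype=np.float64)]
    for _ in range(max_steps):
        p = points[-1]
        if occupancy_field(p) < occ_thresh:   # left the hair volume: stop
            break
        d = orientation_field(p)
        d = d / (np.linalg.norm(d) + 1e-8)    # keep each step unit-length
        points.append(p + step_size * d)
    return np.stack(points)

def grow_hair(roots, coarse_field, fine_field, occupancy_field):
    """One plausible reading of the two-stage scheme: a coarse pass fixes
    the global shape, then strands are re-grown from the same scalp roots
    with a higher-resolution field to recover fine-grained detail."""
    coarse = [grow_strand(r, coarse_field, occupancy_field) for r in roots]
    return [grow_strand(s[0], fine_field, occupancy_field) for s in coarse]
```

The coarse pass is cheap because the low-resolution field is fast to query, while the fine pass only has to add local detail; that division of labor is one way such a two-stage design can trade reconstruction quality against speed, as the abstract suggests.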