Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Single-image strand-level 3D hair reconstruction faces challenges including hairstyle diversity, geometric complexity, and scarcity of realistic training data. To address these, we propose the first Transformer-based prior model trained jointly on real and synthetic strand-level hair data, enabling unified modeling of both internal hair structure and external hairstyle geometry. Our method integrates differentiable Gaussian splatting for high-fidelity rendering and introduces a local geometric optimization module to enhance fine-scale geometric fidelity. It operates effectively under both single- and multi-view input conditions. Quantitative and qualitative evaluations demonstrate significant improvements in strand orientation accuracy, global silhouette fidelity, and back-view consistency, outperforming state-of-the-art methods across multiple metrics. The reconstructed hair geometry is directly applicable to virtual-human animation and avatar-driving pipelines.
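The summary highlights strand orientation accuracy as a key evaluation metric. As a minimal illustration of how such a metric can be computed (this is a hedged sketch, not the paper's evaluation code; the function names and the unsigned-orientation convention are assumptions), one can compare unit tangent vectors along matching strand polylines:

```python
import numpy as np

def strand_orientations(strand):
    """Unit tangent vectors of a strand given as an (N, 3) polyline."""
    seg = np.diff(strand, axis=0)                      # (N-1, 3) segment vectors
    return seg / np.linalg.norm(seg, axis=1, keepdims=True)

def mean_orientation_error_deg(pred, gt):
    """Mean angular error (degrees) between corresponding strand segments.

    Orientation is treated as unsigned here (a segment and its flip count
    as the same direction), a common convention for hair-orientation fields.
    """
    t_pred = strand_orientations(pred)
    t_gt = strand_orientations(gt)
    cos = np.abs(np.sum(t_pred * t_gt, axis=1)).clip(0.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

# Toy example: a straight diagonal strand vs. a copy rotated 5° about z.
gt = np.stack([np.linspace(0.0, 1.0, 10)] * 3, axis=1)
theta = np.radians(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
pred = gt @ rot.T
err = mean_orientation_error_deg(pred, gt)
print(round(err, 2))
```

A real benchmark would additionally need strand correspondence (e.g. nearest-neighbor matching between reconstructed and ground-truth strands), which this toy example sidesteps by comparing a strand with a transformed copy of itself.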

📝 Abstract
We present a novel approach for 3D hair reconstruction from single photographs based on a global hair prior combined with local optimization. Capturing strand-based hair geometry from single photographs is challenging due to the variety and geometric complexity of hairstyles and the lack of ground truth training data. Classical reconstruction methods like multi-view stereo only reconstruct the visible hair strands, missing the inner structure of hairstyles and hampering realistic hair simulation. To address this, existing methods leverage hairstyle priors trained on synthetic data. Such data, however, is limited in both quantity and quality since it requires manual work from skilled artists to model the 3D hairstyles and create near-photorealistic renderings. To address this, we propose a novel approach that uses both real and synthetic data to learn an effective hairstyle prior. Specifically, we train a transformer-based prior model on synthetic data to obtain knowledge of the internal hairstyle geometry and introduce real data in the learning process to model the outer structure. This training scheme is able to model the visible hair strands depicted in an input image, while preserving the general 3D structure of hairstyles. We exploit this prior to create a Gaussian-splatting-based reconstruction method that creates hairstyles from one or more images. Qualitative and quantitative comparisons with existing reconstruction pipelines demonstrate the effectiveness and superior performance of our method for capturing detailed hair orientation, overall silhouette, and backside consistency. For additional results and code, please refer to https://im2haircut.is.tue.mpg.de.
Problem

Research questions and friction points this paper is trying to address.

Reconstructing strand-based 3D hair geometry from a single photograph
Scarcity of ground-truth training data for 3D hairstyles
Recovering the inner hair structure that visible-surface methods miss, which is needed for realistic hair simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based hairstyle prior capturing internal hair geometry
Joint training on real and synthetic strand-level data
Gaussian-splatting-based reconstruction from one or more images