GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans

📅 2025-05-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High-fidelity reconstruction of complex hair geometry from textureless 3D scan data remains challenging due to the absence of photometric cues. Method: We propose a purely geometry-driven hair reconstruction framework that eliminates reliance on RGB color information. The approach combines multi-modal geometric orientation estimation, sharp surface features detected directly on the scan, a neural 2D line detector applied to renderings of the scan shading, and a diffusion prior trained on synthetic hair scans, refined with an improved noise schedule and adapted to the scan via a scan-specific text prompt. Contribution/Results: This combination of supervision signals enables accurate reconstruction of diverse hairstyles, from simple to highly entangled, without any color input. To our knowledge, this is the first method to recover high-quality hair strands solely from uncolored geometric input. The reconstructed strands are released as Strands400, a dataset of hair strands with detailed surface geometry extracted from real-world scans of 400 subjects, supporting realistic hair modeling for digital humans, character animation, and AR/VR applications.
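
As a concrete illustration of recovering strand direction from shading alone, the snippet below estimates a per-pixel 2D orientation from a grayscale shading render using a classical structure tensor. This is only a stand-in for the paper's learned 2D line detector, under the assumption that strand direction shows up as locally coherent intensity anisotropy; the function name and parameters are illustrative, not the authors' API.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def orientation_from_shading(shading, sigma=2.0):
    """Per-pixel 2D orientation and coherence from a grayscale shading render,
    computed with a structure tensor. A classical stand-in for a learned
    2D line detector, shown only to illustrate color-free orientation cues."""
    gx = sobel(shading, axis=1)
    gy = sobel(shading, axis=0)
    # Average the tensor components over a local neighborhood.
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    # Dominant gradient direction, rotated 90 degrees so theta points along strands.
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2.0
    # Coherence in [0, 1]: how line-like (anisotropic) the neighborhood is.
    coherence = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) / (jxx + jyy + 1e-8)
    return theta, coherence

# Usage; random noise stands in for an actual shading render of the scan.
shading = np.random.rand(256, 256).astype(np.float32)
theta, coherence = orientation_from_shading(shading)
```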

📝 Abstract
We propose a novel method that reconstructs hair strands directly from colorless 3D scans by leveraging multi-modal hair orientation extraction. Hair strand reconstruction is a fundamental problem in computer vision and graphics that can be used for high-fidelity digital avatar synthesis, animation, and AR/VR applications. However, accurately recovering hair strands from raw scan data remains challenging due to the complex and fine-grained structure of human hair. Existing methods typically rely on RGB captures, which are sensitive to the capture environment and form a difficult domain for extracting the orientation of the guiding strands, especially for intricate hairstyles. To reconstruct the hair purely from the observed geometry, our method finds sharp surface features directly on the scan and estimates strand orientation through a neural 2D line detector applied to renderings of the scan shading. Additionally, we incorporate a diffusion prior trained on a diverse set of synthetic hair scans, refined with an improved noise schedule, and adapted to the reconstructed content via a scan-specific text prompt. We demonstrate that this combination of supervision signals enables accurate reconstruction of both simple and intricate hairstyles without relying on color information. To facilitate further research, we introduce Strands400, the largest publicly available dataset of hair strands with detailed surface geometry extracted from real-world data, which contains reconstructed hair strands from scans of 400 subjects.
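
The abstract mentions a diffusion prior refined with an improved noise schedule and adapted via a scan-specific text prompt. Below is a minimal, self-contained sketch of score-distillation-style guidance with an adjustable noise schedule, using a toy denoiser in place of the actual prior trained on synthetic hair scans; the prompt conditioning and the strand parameterization are omitted, and every class, name, and hyperparameter here is hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in for a diffusion prior over strand/orientation maps.
    The paper's prior is trained on synthetic hair scans; this toy
    network exists only so the sketch runs end to end."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x_noisy, t):
        # Epsilon-prediction: estimate the noise that was added at step t.
        return self.net(x_noisy)

def sds_gradient(denoiser, x, alphas_cumprod, t):
    """Score-distillation-style gradient of a diffusion prior w.r.t. x.
    The schedule (alphas_cumprod) and the sampled timestep range are the
    knobs a refined noise schedule would adjust."""
    a_t = alphas_cumprod[t]
    noise = torch.randn_like(x)
    x_noisy = a_t.sqrt() * x + (1.0 - a_t).sqrt() * noise
    with torch.no_grad():
        eps_pred = denoiser(x_noisy, t)
    return eps_pred - noise  # pushes x toward regions the prior finds likely

# Illustrative cosine-like schedule, with timesteps restricted to mid-range noise.
T = 1000
steps = torch.arange(T, dtype=torch.float32)
alphas_cumprod = torch.cos((steps / T + 0.008) / 1.008 * torch.pi / 2) ** 2

denoiser = ToyDenoiser()
strand_map = torch.zeros(1, 3, 64, 64, requires_grad=True)  # hypothetical optimizable map
t = int(torch.randint(200, 700, (1,)))                      # avoid the extreme noise levels
grad = sds_gradient(denoiser, strand_map, alphas_cumprod, t)
# Standard surrogate-loss trick: its gradient w.r.t. strand_map equals `grad`.
(grad.detach() * strand_map).sum().backward()
```
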
Problem

Research questions and friction points this paper is trying to address.

Reconstructing hair strands from colorless 3D scans without photometric cues
Recovering the complex, fine-grained structure of human hair from raw scan data
Dependence of existing methods on RGB captures for estimating strand orientation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages multi-modal hair orientation extraction, including sharp surface features found directly on the scan (see the sketch after this list)
Uses neural 2D line detector for strand orientation
Incorporates diffusion prior trained on synthetic scans
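
As noted above, one of the geometric cues is finding sharp surface features directly on the scan. The sketch below marks mesh edges with large dihedral angles as sharp and records their directions using trimesh; this is a simple classical proxy, not the paper's actual feature extractor, and the threshold and example mesh are placeholders.

```python
import numpy as np
import trimesh

def sharp_feature_edges(mesh, angle_threshold_deg=40.0):
    """Mark mesh edges whose adjacent-face normal angle exceeds a threshold
    as 'sharp', and return each edge with its unit direction. A simple
    geometric proxy for strand-like surface features on a colorless scan."""
    angles = mesh.face_adjacency_angles                   # angle between adjacent face normals
    sharp = angles > np.radians(angle_threshold_deg)
    edges = mesh.face_adjacency_edges[sharp]               # (n, 2) vertex indices per sharp edge
    directions = mesh.vertices[edges[:, 1]] - mesh.vertices[edges[:, 0]]
    directions /= np.linalg.norm(directions, axis=1, keepdims=True) + 1e-12
    return edges, directions

# Usage on a bundled example mesh (a smooth icosphere, so few or no sharp
# edges are expected; a real hair scan would be far more feature-rich).
mesh = trimesh.creation.icosphere(subdivisions=3)
edges, directions = sharp_feature_edges(mesh, angle_threshold_deg=10.0)
```
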