AI Summary
Existing high-fidelity hair reconstruction methods based on 3D Gaussian splatting incur substantial storage and rendering overhead due to their reliance on millions of primitives. This work proposes a compact representation that clusters individual hair strands into representative hair cards sharing a common texture codebook, significantly improving efficiency. By integrating generative priors to accelerate initialization of the strand geometry, the method achieves comparable rendering quality while reducing strand reconstruction time by a factor of four and decreasing memory consumption by over two orders of magnitude. The approach thus enables efficient, high-fidelity hair modeling suitable for practical applications.
Abstract
We present a compact pipeline for high-fidelity hair reconstruction from multi-view images. While recent 3D Gaussian Splatting (3DGS) methods achieve realistic results, they often require millions of primitives, leading to high storage and rendering costs. Observing that hair exhibits structural and visual similarities across a hairstyle, we cluster strands into representative hair cards and group these cards into shared texture codebooks. Our approach integrates this structure with 3DGS rendering, significantly reducing reconstruction time and storage while maintaining comparable visual quality. In addition, we propose a generative-prior-accelerated method to reconstruct the initial strand geometry from a set of images. Our experiments demonstrate a 4-fold reduction in strand reconstruction time and comparable rendering performance with an over 200x lower memory footprint.
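The core idea, clustering strands into representative hair cards that share a small texture codebook, can be illustrated with a minimal sketch. The sketch below assumes strands are given as 3D polylines and per-card textures as image patches; the fixed-length descriptor, k-means clustering, and the specific cluster and codebook sizes are illustrative assumptions, not the paper's actual algorithm or settings.

```python
# Illustrative sketch: group strands into hair cards and quantize card
# textures into a shared codebook. Descriptors, cluster counts, and the
# use of k-means are hypothetical choices for demonstration only.
import numpy as np
from sklearn.cluster import KMeans


def strand_descriptor(strand, n_samples=16):
    """Resample a strand polyline of shape (N, 3) into a fixed-length feature vector."""
    t = np.linspace(0.0, 1.0, n_samples)
    s = np.linspace(0.0, 1.0, len(strand))
    resampled = np.stack([np.interp(t, s, strand[:, d]) for d in range(3)], axis=1)
    return resampled.reshape(-1)  # shape: (n_samples * 3,)


def cluster_strands_into_cards(strands, n_cards=256):
    """Group geometrically similar strands; each cluster becomes one hair card."""
    feats = np.stack([strand_descriptor(s) for s in strands])
    km = KMeans(n_clusters=n_cards, n_init="auto", random_state=0).fit(feats)
    cards = [np.where(km.labels_ == c)[0] for c in range(n_cards)]
    return cards, km.cluster_centers_


def build_texture_codebook(card_textures, n_codes=64):
    """Quantize per-card texture patches (M, H, W, C) into a small shared codebook."""
    flat = card_textures.reshape(len(card_textures), -1)
    km = KMeans(n_clusters=n_codes, n_init="auto", random_state=0).fit(flat)
    codebook = km.cluster_centers_.reshape(n_codes, *card_textures.shape[1:])
    return codebook, km.labels_  # each card stores only an index into the codebook
```

In this reading, storage drops because the scene keeps a few hundred card geometries plus a shared codebook of texture entries, rather than per-primitive attributes for millions of Gaussians; how the cards and codebook interface with the 3DGS renderer is specific to the paper and not reproduced here.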