🤖 AI Summary
This work addresses the challenge of efficiently rendering high-fidelity, animatable 3D facial avatars within traditional graphics pipelines. By integrating neural radiance fields with a parametric face model, the method learns a set of radiance manifolds in 3D space during a registration phase and extracts an explicit layered mesh along with corresponding appearance and deformation textures. At deployment, it enables efficient animation and real-time rendering through linear blend skinning and alpha compositing on a static mesh. This approach achieves, for the first time, a seamless transfer of radiance field-based photorealistic 3D faces into classical graphics pipelines, running efficiently on legacy hardware without custom engines or specialized hardware, while also supporting online streaming.
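The summary above mentions linear blend skinning (LBS) as the deployment-time deformation driving the static mesh. As a point of reference, a minimal sketch of classical LBS is shown below; the array shapes, bone transforms, and skinning weights are illustrative placeholders, not details taken from this work.

```python
# Minimal sketch of linear blend skinning (LBS): each vertex is deformed by a
# weighted blend of per-bone rigid transforms. All shapes and values here are
# illustrative, not from the paper.
import numpy as np

def linear_blend_skinning(vertices, weights, transforms):
    """Deform rest-pose vertices by a weighted blend of bone transforms.

    vertices:   (V, 3) rest-pose positions.
    weights:    (V, B) per-vertex bone weights, each row summing to 1.
    transforms: (B, 4, 4) homogeneous bone transforms.
    Returns (V, 3) deformed positions.
    """
    # Homogeneous coordinates: append a 1 to every vertex.
    v_h = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)
    # Blend the bone transforms per vertex, then apply the blended matrix.
    blended = np.einsum('vb,bij->vij', weights, transforms)   # (V, 4, 4)
    deformed = np.einsum('vij,vj->vi', blended, v_h)          # (V, 4)
    return deformed[:, :3]

# Toy example: one vertex, half-weighted between the identity bone and a bone
# translated by 2 along x, so the vertex moves by 1 along x.
verts = np.array([[0.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
t0 = np.eye(4)
t1 = np.eye(4); t1[0, 3] = 2.0
out = linear_blend_skinning(verts, w, np.stack([t0, t1]))
```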
📄 Abstract
We introduce a novel representation for efficient classical rendering of photorealistic 3D face avatars. Leveraging recent advances in radiance fields anchored to parametric face models, our approach achieves controllable volumetric rendering of complex facial features, including hair, skin, and eyes. At enrollment time, we learn a set of radiance manifolds in 3D space to extract an explicit layered mesh, along with appearance and warp textures. During deployment, this allows us to control and animate the face through simple linear blending and alpha compositing of textures over a static mesh. This explicit representation also enables the generated avatar to be efficiently streamed online and then rendered using classical mesh and shader-based rendering on legacy graphics platforms, eliminating the need for any custom engineering or integration.