3D Engine-ready Photorealistic Avatars via Dynamic Textures

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D portrait reconstruction methods rely on costly hardware or implicit representations (e.g., NeRF), limiting compatibility with industrial rendering pipelines such as those in real-time game engines. This work proposes an explicit, consumer-grade 3D virtual portrait generation framework that reconstructs high-fidelity mesh models and synthesizes dynamic, lighting-adaptive textures from only a few input images. Our core contribution is the novel “dynamic texture-driven” paradigm, which jointly optimizes differentiable rendering, neural texture synthesis, classical mesh refinement, and dynamic UV mapping—yielding explicit geometry and texture representations fully compliant with standard PBR shading and real-time rasterization. The method achieves 60 FPS rendering on a single GPU, attaining visual fidelity comparable to NeRF-based approaches. It has been successfully integrated into Unity and Unreal Engine, enabling real-time interactive control over facial expressions, illumination, and viewpoint.
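The "dynamic texture-driven" idea can be sketched in a few lines: instead of a static albedo map, the texture is regenerated each frame from the current expression and view parameters, then sampled through a fixed UV layout exactly as a standard PBR texture would be. Everything below (function names, the basis-blending scheme, the view modulation) is an illustrative stand-in, not the paper's actual neural architecture:

```python
import numpy as np

def generate_dynamic_texture(expression, view_dir, size=64, seed=0):
    """Toy stand-in for the neural texture synthesizer: blends a fixed set
    of basis textures by expression weights, modulated by view direction."""
    rng = np.random.default_rng(seed)
    basis = rng.random((len(expression), size, size, 3))  # fixed basis maps
    w = np.asarray(expression, dtype=np.float64)
    w = w / w.sum()                                       # normalize weights
    tex = np.tensordot(w, basis, axes=1)                  # weighted blend
    # crude view-dependent modulation (placeholder for learned effects)
    tex *= 0.5 + 0.5 * abs(view_dir[2])
    return np.clip(tex, 0.0, 1.0)

def sample_texture(tex, uv):
    """Nearest-neighbour UV lookup, as a rasterizer's sampler would do."""
    h, w_, _ = tex.shape
    x = min(int(uv[0] * w_), w_ - 1)
    y = min(int(uv[1] * h), h - 1)
    return tex[y, x]

# Per frame: a new expression yields a new texture, while the mesh and
# its UV layout stay fixed -- which is what keeps the asset engine-ready.
tex = generate_dynamic_texture(expression=[0.7, 0.2, 0.1], view_dir=(0, 0, 1))
albedo = sample_texture(tex, uv=(0.25, 0.75))
```

Because the per-frame output is just an ordinary RGB texture on an explicit mesh, it can be uploaded to Unity or Unreal Engine as a regular material input with no changes to the rasterization pipeline.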

📝 Abstract
As the digital and physical worlds become more intertwined, there has been a lot of interest in digital avatars that closely resemble their real-world counterparts. Current digitization methods used in 3D production pipelines require costly capture setups, making them impractical for mass usage among common consumers. Recent academic literature has found success in reconstructing humans from limited data using implicit representations (e.g., neural radiance fields, NeRFs), which are able to produce impressive videos. However, these methods are incompatible with traditional rendering pipelines, making it difficult to use them in applications such as games. In this work, we propose an end-to-end pipeline that builds explicitly-represented photorealistic 3D avatars using standard 3D assets. Our key idea is the use of dynamically-generated textures to enhance the realism and visually mask deficiencies in the underlying mesh geometry. This allows for seamless integration with current graphics pipelines while achieving comparable visual quality to state-of-the-art 3D avatar generation methods.
Problem

Research questions and friction points this paper is trying to address.

Create 3D photorealistic avatars for mass consumer use
Overcome limitations of costly 3D digitization methods
Enable compatibility with traditional rendering pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic textures enhance 3D avatar realism.
End-to-end pipeline uses standard 3D assets.
Seamless integration with current graphics pipelines.
Yifan Wang (Samsung Research America)
Ivan Molodetskikh (Lomonosov Moscow State University)
Ondrej Texler (Samsung Research America)
Dimitar Dinev (Pipio)
Computer Graphics, Computer Vision, Digital Humans