SelfieAvatar: Real-time Head Avatar Reenactment from a Selfie Video

📅 2025-05-26
🏛️ IEEE International Conference on Automatic Face & Gesture Recognition
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing methods struggle to achieve real-time, high-fidelity reconstruction of animatable head avatars from a single selfie video, particularly in capturing non-facial regions, background context, and high-frequency details such as wrinkles and hair strands. This work proposes a novel framework that integrates 3D Morphable Models (3DMM) with a StyleGAN generator, employing a hybrid loss function to jointly optimize foreground reconstruction and avatar generation within an adversarial training paradigm. Requiring only a single selfie video, the approach enables high-quality avatar creation without extensive training data or complex inputs. It outperforms current state-of-the-art methods in both self-driven and cross-driven scenarios, significantly improving the richness of texture detail and overall visual fidelity.

📝 Abstract
Head avatar reenactment focuses on creating animatable personal avatars from monocular videos, serving as a foundational element for applications like social signal understanding, gaming, human-machine interaction, and computer vision. Recent advances in 3D Morphable Model (3DMM)-based facial reconstruction methods have achieved remarkable high-fidelity face estimation. However, on the one hand, they struggle to capture the entire head, including non-facial regions and background details in real time, which is an essential aspect for producing realistic, high-fidelity head avatars. On the other hand, recent approaches leveraging generative adversarial networks (GANs) for head avatar generation from videos can achieve high-quality reenactments but encounter limitations in reproducing fine-grained head details, such as wrinkles and hair textures. In addition, existing methods generally rely on a large amount of training data, and rarely focus on using only a simple selfie video to achieve avatar reenactment. To address these challenges, this study introduces a method for detailed head avatar reenactment using a selfie video. The approach combines 3DMMs with a StyleGAN-based generator. A detailed reconstruction model is proposed, incorporating mixed loss functions for foreground reconstruction and avatar image generation during adversarial training to recover high-frequency details. Qualitative and quantitative evaluations on self-reenactment and cross-reenactment tasks demonstrate that the proposed method achieves superior head avatar reconstruction with rich and intricate textures compared to existing approaches.
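The abstract describes a mixed loss that combines foreground reconstruction with a generator-side adversarial term during adversarial training. The paper's exact loss terms and weights are not given here, so the following is only a minimal illustrative sketch: it assumes an L1 foreground reconstruction term plus a non-saturating GAN generator loss, with hypothetical weights `w_rec` and `w_adv` (not from the paper).

```python
import numpy as np

def hybrid_loss(pred_fg, target_fg, disc_score_fake, w_rec=1.0, w_adv=0.1):
    """Illustrative hybrid loss: foreground reconstruction + adversarial term.

    pred_fg, target_fg: predicted and ground-truth foreground images
                        (e.g. masked to exclude the background).
    disc_score_fake:    discriminator probabilities D(G(x)) for generated frames.
    w_rec, w_adv:       hypothetical weights; the paper does not publish values.
    """
    # Foreground reconstruction: mean absolute (L1) pixel error.
    rec = np.mean(np.abs(pred_fg - target_fg))
    # Generator-side non-saturating adversarial term: -log D(G(x)).
    adv = -np.mean(np.log(disc_score_fake + 1e-8))
    return w_rec * rec + w_adv * adv

# Toy example with random "frames" and discriminator scores.
rng = np.random.default_rng(0)
pred = rng.random((4, 64, 64, 3))
target = rng.random((4, 64, 64, 3))
scores = rng.uniform(0.4, 0.9, size=4)
loss = hybrid_loss(pred, target, scores)
```

In a real training loop this generator objective would alternate with a discriminator update; the sketch only shows how the two terms might be combined into a single scalar.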
Problem

Research questions and friction points this paper is trying to address.

head avatar reenactment
selfie video
high-fidelity reconstruction
fine-grained details
real-time generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Head Avatar Reenactment
3D Morphable Model (3DMM)
StyleGAN
Selfie Video
High-Frequency Detail Reconstruction