Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation

📅 2026-01-02
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenge of generating expressive digital avatars capable of real-time, bidirectional interaction with users, a task hindered by existing methods’ inability to respond with low latency to both speech and nonverbal cues such as nodding or laughter. The authors propose Avatar Forcing, a novel framework that introduces diffusion forcing for the first time to enable real-time avatar generation under causal constraints, effectively fusing audio and motion modalities. Additionally, they devise a label-free direct preference optimization approach that leverages synthetically generated negative samples to enhance interactive expressiveness. The implemented system achieves an end-to-end latency of approximately 500 ms—6.8× faster than baseline methods—and generates motions that significantly outperform alternatives in user preference studies, securing over 80% of favorable votes.
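The label-free preference optimization described above pairs a normal generation (the winning sample) against one produced with the user conditions dropped (the losing sample). A minimal sketch of the underlying DPO objective in scalar form — the function name, signature, and `beta` value are illustrative assumptions, not the paper's implementation, which applies the idea in a diffusion setting:

```python
import math

def dpo_loss(logp_win: float, logp_lose: float,
             ref_logp_win: float, ref_logp_lose: float,
             beta: float = 0.1) -> float:
    """Direct preference optimization loss for one (win, lose) pair.

    Here the 'winning' sample would be motion generated with the user's
    audio/motion conditions, and the 'losing' sample the same model's
    output with those conditions dropped -- no human preference labels.
    """
    # Policy-vs-reference log-ratio margin between winner and loser.
    margin = beta * ((logp_win - ref_logp_win) - (logp_lose - ref_logp_lose))
    # Negative log-sigmoid of the margin: pushes the policy to assign
    # relatively higher likelihood to the conditioned (winning) sample.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With a zero margin the loss is ln 2 ≈ 0.693; as the policy favors the conditioned sample, the loss falls toward zero. In the diffusion setting, exact log-likelihoods would be replaced by denoising-error surrogates.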

📝 Abstract
Talking head generation creates lifelike avatars from static portraits for virtual communication and content creation. However, current models do not yet convey the feeling of truly interactive communication, often generating one-way responses that lack emotional engagement. We identify two key challenges toward truly interactive avatars: generating motion in real time under causal constraints and learning expressive, vibrant reactions without additional labeled data. To address these challenges, we propose Avatar Forcing, a new framework for interactive head avatar generation that models real-time user-avatar interactions through diffusion forcing. This design allows the avatar to process real-time multimodal inputs, including the user's audio and motion, with low latency for instant reactions to both verbal and non-verbal cues such as speech, nods, and laughter. Furthermore, we introduce a direct preference optimization method that leverages synthetic losing samples constructed by dropping user conditions, enabling label-free learning of expressive interaction. Experimental results demonstrate that our framework enables real-time interaction with low latency (approximately 500 ms), achieving a 6.8× speedup over the baseline, and produces reactive, expressive avatar motion that is preferred in over 80% of comparisons against the baseline.
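Diffusion forcing, named in the abstract above, assigns each frame its own noise level, so earlier frames can finish denoising and be streamed out while later frames are still noisy — which is what makes causal, low-latency generation possible. A minimal sketch of such a causal "pyramid" noise schedule (the function name, shapes, and schedule are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def pyramid_noise_levels(num_frames: int, steps_per_frame: int) -> np.ndarray:
    """Per-frame noise levels for a causal 'pyramid' denoising schedule.

    Row t gives every frame's noise level (1.0 = pure noise, 0.0 = clean)
    at denoising step t. Frame k starts denoising k steps after frame 0,
    so earlier frames reach zero noise sooner and can be emitted
    (streamed) before later frames finish denoising.
    """
    total_steps = steps_per_frame + num_frames - 1
    levels = np.empty((total_steps + 1, num_frames))
    for t in range(total_steps + 1):
        for k in range(num_frames):
            # Frame k's denoising progress, clamped to [0, 1].
            progress = np.clip((t - k) / steps_per_frame, 0.0, 1.0)
            levels[t, k] = 1.0 - progress
    return levels
```

In this sketch, frame 0 is clean after `steps_per_frame` steps while frame `k` lags by `k` steps; at any step, a frame is never noisier than the frames after it, which is the causal structure diffusion forcing exploits.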
Problem

Research questions and friction points this paper is trying to address.

interactive avatar
real-time generation
talking head
emotional engagement
causal constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Avatar Forcing
diffusion forcing
real-time interaction
label-free preference optimization
talking head generation