RAP: Real-time Audio-driven Portrait Animation with Video Diffusion Transformer

πŸ“… 2025-08-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing audio-driven portrait animation methods achieve high visual quality but incur substantial computational overhead, hindering real-time deployment under strict latency and memory constraints. This paper proposes a lightweight video diffusion Transformer framework that generates high-fidelity talking-head videos with low latency in a compact latent space. Our key contributions are: (1) a hybrid attention mechanism that enhances fine-grained audio-visual alignment; (2) a static-dynamic training and inference paradigm that eliminates the need for explicit motion supervision while mitigating long-term temporal drift; and (3) spatiotemporal feature disentanglement coupled with high-dimensional representation compression, jointly optimizing lip-sync accuracy, visual fidelity, and temporal coherence. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, with end-to-end real-time inference capability on standard hardware.
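The summary describes a hybrid attention mechanism for fine-grained audio-visual alignment but gives no implementation details. As a minimal sketch, one plausible reading is a Transformer block that interleaves self-attention over compressed video-latent tokens with cross-attention to audio tokens; all module names, shapes, and the fusion order below are assumptions, not the paper's actual architecture.

```python
# Minimal PyTorch sketch of a "hybrid attention" block: self-attention over
# video latent tokens plus cross-attention to audio tokens. All names, shapes,
# and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn


class HybridAttentionBlock(nn.Module):
    def __init__(self, dim: int = 512, n_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # x:     (B, N_video_tokens, dim)  -- compressed video latents
        # audio: (B, N_audio_tokens, dim)  -- audio features (e.g. from a speech encoder)
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        x = x + self.cross_attn(h, audio, audio, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))


# Smoke test with toy shapes.
block = HybridAttentionBlock()
video_tokens = torch.randn(2, 256, 512)
audio_tokens = torch.randn(2, 50, 512)
print(block(video_tokens, audio_tokens).shape)  # torch.Size([2, 256, 512])
```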

πŸ“ Abstract
Audio-driven portrait animation aims to synthesize realistic and natural talking head videos from an input audio signal and a single reference image. While existing methods achieve high-quality results by leveraging high-dimensional intermediate representations and explicitly modeling motion dynamics, their computational complexity renders them unsuitable for real-time deployment. Real-time inference imposes stringent latency and memory constraints, often necessitating the use of highly compressed latent representations. However, operating in such compact spaces hinders the preservation of fine-grained spatiotemporal details, thereby complicating audio-visual synchronization. To address this challenge, we present RAP (Real-time Audio-driven Portrait animation), a unified framework for generating high-quality talking portraits under real-time constraints. Specifically, RAP introduces a hybrid attention mechanism for fine-grained audio control, and a static-dynamic training-inference paradigm that avoids explicit motion supervision. Through these techniques, RAP achieves precise audio-driven control, mitigates long-term temporal drift, and maintains high visual fidelity. Extensive experiments demonstrate that RAP achieves state-of-the-art performance while operating under real-time constraints.
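To make the real-time argument concrete: attention cost in a diffusion Transformer scales quadratically with token count, so heavy spatiotemporal compression of the latent space directly buys latency. The compression factors below (8x spatial, 4x temporal, 2x2 patching) are typical of video VAEs and are assumptions, not the paper's reported configuration.

```python
# Back-of-the-envelope token count: why a compact latent space matters for
# real-time diffusion. Compression factors are assumed, not reported values.
def dit_tokens(h, w, frames, spatial_ds, temporal_ds, patch=2):
    lh, lw, lt = h // spatial_ds, w // spatial_ds, frames // temporal_ds
    return (lh // patch) * (lw // patch) * lt

pixel_space = dit_tokens(512, 512, 16, spatial_ds=1, temporal_ds=1)
latent_space = dit_tokens(512, 512, 16, spatial_ds=8, temporal_ds=4)
print(pixel_space, latent_space, pixel_space / latent_space)
# 1048576 4096 256.0 -> attention cost drops quadratically with token count
```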
Problem

Research questions and friction points this paper is trying to address.

Real-time audio-driven portrait animation synthesis
Balancing quality and computational efficiency
Preserving fine-grained details in compressed representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid attention mechanism for audio control
Static-dynamic training-inference paradigm (see the sketch after this list)
Video Diffusion Transformer for real-time animation
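As referenced above, a hedged sketch of how a static-dynamic paradigm might run at inference time: each chunk is denoised conditioned on a fixed reference latent (the static anchor that counters long-term drift) and the tail of the previous chunk (the dynamic context that preserves continuity), with audio as the only driving signal. `encode`, `denoise_chunk`, and `decode` are hypothetical stand-ins for the paper's actual components.

```python
# Hedged sketch of a static-dynamic inference loop, under the assumption that
# each generated chunk is conditioned on a fixed reference latent plus the
# latents of the previous chunk. All callables are hypothetical stand-ins.
def animate(reference_image, audio_chunks, encode, denoise_chunk, decode):
    static_latent = encode(reference_image)   # fixed anchor, never updated
    dynamic_latent = static_latent            # rolling context, updated per chunk
    frames = []
    for audio in audio_chunks:
        chunk = denoise_chunk(
            static=static_latent,    # anti-drift identity/appearance anchor
            dynamic=dynamic_latent,  # temporal continuity from the last chunk
            audio=audio,             # drives lip and head motion
        )
        dynamic_latent = chunk[-1]   # carry the newest latent forward
        frames.extend(decode(chunk))
    return frames
```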
Authors

Fangyu Du
Soul AI
Taiqing Li
Soul AI
Ziwei Zhang
Soul AI
Qian Qiao
Soul AI
Tan Yu
NVIDIA
Dingcheng Zhen
SoulApp.com
Xu Jia
Dalian University of Technology
Yang Yang
Xi’an Jiaotong University
Shunshun Yin
Soul AI
Siyuan Liu
Soul AI