🤖 AI Summary
Existing audio-driven portrait animation methods achieve high visual quality but incur substantial computational overhead, hindering real-time deployment under strict latency and memory constraints. This paper proposes a lightweight video diffusion Transformer framework that generates high-fidelity talking-head videos with low latency in a compact latent space. Our key contributions are: (1) a hybrid attention mechanism that enhances fine-grained audio-visual alignment; (2) a static-dynamic training and inference paradigm that eliminates the need for explicit motion supervision while mitigating long-term temporal drift; and (3) spatiotemporal feature disentanglement coupled with high-dimensional representation compression, jointly optimizing lip-sync accuracy, visual fidelity, and temporal coherence. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, with end-to-end real-time inference capability on standard hardware.
📄 Abstract
Audio-driven portrait animation aims to synthesize realistic and natural talking-head videos from an input audio signal and a single reference image. While existing methods achieve high-quality results by leveraging high-dimensional intermediate representations and explicitly modeling motion dynamics, their computational complexity renders them unsuitable for real-time deployment. Real-time inference imposes stringent latency and memory constraints, often necessitating the use of highly compressed latent representations. However, operating in such compact spaces hinders the preservation of fine-grained spatiotemporal details, thereby complicating audio-visual synchronization. To address these challenges, we present RAP (Real-time Audio-driven Portrait animation), a unified framework for generating high-quality talking portraits under real-time constraints. Specifically, RAP introduces a hybrid attention mechanism for fine-grained audio control, and a static-dynamic training-inference paradigm that avoids explicit motion supervision. Through these techniques, RAP achieves precise audio-driven control, mitigates long-term temporal drift, and maintains high visual fidelity. Extensive experiments demonstrate that RAP achieves state-of-the-art performance while operating under real-time constraints.
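To make the hybrid attention idea concrete, here is a minimal illustrative sketch, not the paper's actual architecture: video latent tokens first attend to one another (spatiotemporal self-attention), then cross-attend to audio features so lip motion can be conditioned on the audio signal. All shapes, the random stand-in projection weights, and the function names (`hybrid_attention_block`, etc.) are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def hybrid_attention_block(video_tokens, audio_tokens, rng):
    """Hypothetical hybrid attention: self-attention over video latents,
    then cross-attention to audio features, each with a residual connection."""
    d = video_tokens.shape[-1]
    # random projections stand in for learned Q/K/V weight matrices
    w = lambda: rng.standard_normal((d, d)) / np.sqrt(d)
    # spatiotemporal self-attention over the compact video latent tokens
    x = video_tokens + attention(video_tokens @ w(), video_tokens @ w(), video_tokens @ w())
    # cross-attention: video queries attend to audio keys/values for lip sync
    x = x + attention(x @ w(), audio_tokens @ w(), audio_tokens @ w())
    return x

rng = np.random.default_rng(0)
video = rng.standard_normal((1, 16, 64))  # (batch, video latent tokens, dim)
audio = rng.standard_normal((1, 8, 64))   # (batch, audio feature frames, dim)
out = hybrid_attention_block(video, audio, rng)
print(out.shape)  # (1, 16, 64)
```

The cross-attention step is where audio conditioning enters: each video token forms a query over the audio frames, which is one common way diffusion-transformer models inject per-frame audio control.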