🤖 AI Summary
To address inaccurate lip synchronization and long-term pose drift in real-time audio-driven talking-head video generation, this paper proposes a flow matching framework tailored for real-time inference. Methodologically, it integrates audio-feature conditioning, explicit pose modeling, and efficient inference optimization to enable end-to-end low-latency sequence generation. Its key contribution is a lightweight flow matching architecture that improves visual naturalness and long-term pose stability while preserving frame-to-frame temporal coherence. Experiments on the HDTF dataset demonstrate a LipSync Confidence score of 8.50, an inference throughput of 141 FPS on a single A10 GPU, and an end-to-end latency of only 0.17 seconds, enabling high-fidelity virtual avatar deployment across diverse real-time scenarios.
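For reference, the standard conditional flow matching objective that such frameworks typically build on is sketched below; the paper's exact interpolation path and conditioning scheme are not specified in this summary, so treat this as the generic form rather than Livatar's training loss.

```latex
% Generic conditional flow matching loss under a linear interpolation path
% (illustrative; Livatar's exact formulation is not given in the abstract).
% x_0 ~ noise, x_1 ~ data frame(s), c = conditioning (e.g., audio features, pose).
x_t = (1 - t)\, x_0 + t\, x_1, \qquad t \sim \mathcal{U}[0, 1]
\mathcal{L}_{\mathrm{FM}}
  = \mathbb{E}_{t,\, x_0,\, x_1}
    \left\| v_\theta(x_t,\, t,\, c) - (x_1 - x_0) \right\|^2
```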
📝 Abstract
We present Livatar, a real-time audio-driven talking-head video generation framework. Existing baselines suffer from limited lip-sync accuracy and long-term pose drift. We address these limitations with a flow-matching-based framework. Coupled with system optimizations, Livatar achieves competitive lip-sync quality with an 8.50 LipSync Confidence score on the HDTF dataset, and reaches a throughput of 141 FPS with an end-to-end latency of 0.17 s on a single A10 GPU. This makes high-fidelity avatars accessible to broader applications. Our project is available at https://www.hedra.com/ with examples at https://h-liu1997.github.io/Livatar-1/.
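As a sanity check on the reported numbers (our arithmetic, not from the paper): 141 FPS implies roughly 7 ms of compute per frame, so the 0.17 s end-to-end figure mostly reflects pipeline depth (audio buffering, chunked encoding, and so on) rather than per-frame generation cost.

```python
# Illustrative arithmetic on the reported figures; the "frames in flight"
# interpretation is an assumption (Little's law), not a claim from the paper.
throughput_fps = 141   # reported throughput on a single A10 GPU
latency_s = 0.17       # reported end-to-end latency

per_frame_ms = 1000 / throughput_fps            # compute budget per frame
frames_in_flight = latency_s * throughput_fps   # implied pipeline depth

print(f"per-frame compute budget: {per_frame_ms:.1f} ms")   # ~7.1 ms
print(f"implied frames in flight: {frames_in_flight:.0f}")  # ~24 frames
```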