🤖 AI Summary
To address challenges in audio-driven talking-head video generation, including poor cross-modal temporal alignment, weak portrait identity consistency, and high computational overhead, this paper proposes a diffusion Transformer with modular spatiotemporal attention. Methodologically: (1) it introduces a dual-path fusion design: Symbiotic Fusion, a deep scheme that preserves speaker identity, and Direct Fusion, a shallow scheme that yields diverse, speech-synchronized motion; (2) it jointly models image, audio, and temporal priors in the latent space under conditional guidance. Experiments demonstrate state-of-the-art performance across multiple benchmarks, with clear gains in lip-sync accuracy and temporal coherence while maintaining strong identity fidelity and natural facial expressiveness, and higher inference efficiency than mainstream diffusion-based approaches.
📝 Abstract
Portrait image animation driven by audio has advanced rapidly, enabling increasingly realistic and expressive animated faces. The central challenges of this multimodality-guided video generation task are fusing the various modalities while ensuring temporal and portrait consistency, and producing vivid, lifelike talking heads. To address these challenges, we present LetsTalk (LatEnt Diffusion TranSformer for Talking Video Synthesis), a diffusion transformer that incorporates modular temporal and spatial attention mechanisms to merge the modalities and enhance spatial-temporal consistency. To handle multimodal conditions, we first summarize three fusion schemes, ranging from shallow to deep in fusion compactness, and thoroughly explore their impact and applicability. We then propose a suitable scheme for each condition according to the modality differences of image, audio, and video generation. For the portrait, we use a deep fusion scheme (Symbiotic Fusion) to ensure portrait consistency; for the audio, we use a shallow fusion scheme (Direct Fusion) to achieve audio-animation alignment while preserving diversity. Our extensive experiments demonstrate that our approach generates temporally coherent and realistic videos with enhanced diversity and liveliness.
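The shallow-vs-deep distinction between the two fusion schemes can be sketched in a few lines of attention arithmetic. The sketch below is a minimal illustration, not the paper's implementation: single-head attention, tiny illustrative token counts and channel width, and the variable names (`video_tokens`, `portrait_tokens`, `audio_tokens`) are all assumptions. It only shows the structural idea: Symbiotic Fusion concatenates portrait tokens into the self-attention stream (deep mixing), while Direct Fusion injects audio via a residual cross-attention (shallow conditioning).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over token sequences.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

# Illustrative shapes only; d is the latent channel width.
d = 64
rng = np.random.default_rng(0)
video_tokens = rng.standard_normal((16, d))     # noisy video latents
portrait_tokens = rng.standard_normal((16, d))  # reference portrait latents
audio_tokens = rng.standard_normal((8, d))      # audio condition features

# Symbiotic Fusion (deep): concatenate portrait tokens with the video
# tokens and run shared self-attention, so identity cues mix into every
# video token; only the video half of the sequence is carried forward.
x = np.concatenate([video_tokens, portrait_tokens], axis=0)
fused = attention(x, x, x)[: video_tokens.shape[0]]

# Direct Fusion (shallow): video tokens cross-attend to the audio tokens,
# injecting the condition as a residual without letting it dominate.
out = fused + attention(fused, audio_tokens, audio_tokens)
print(out.shape)  # (16, 64)
```

The asymmetry mirrors the abstract's rationale: identity must stay locked (hence deep mixing of portrait tokens), while audio should guide motion without collapsing its diversity (hence a light residual injection).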