🤖 AI Summary
Existing speech-driven facial video generation methods suffer from limited expressiveness of hand-crafted intermediate representations (e.g., 3DMM, landmarks) and error propagation from external pre-trained models, hindering high-fidelity synthesis. To address this, we propose an end-to-end latent-space modeling paradigm: (1) we replace explicit geometric representations with a diffusion autoencoder (DAE) to learn a compact, data-driven latent space directly from images; (2) we design a Conformer-based speech encoder that maps audio into temporally coherent latent sequences within this space; and (3) we introduce implicit pose modeling and a DDIM-based single-frame decoding mechanism to ensure pose controllability and long-sequence temporal coherence. To our knowledge, this is the first work to integrate DAEs into speech-driven facial video generation. Extensive experiments demonstrate state-of-the-art performance in lip-sync accuracy, visual fidelity, and naturalness of head motion. Ablation studies validate the efficacy of each component.
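The two-stage inference flow described above (speech → latent sequence → decoded frames) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `speech2latent` and `decode_frame` are hypothetical stand-ins (here plain linear maps) for the Conformer-based speech encoder and the DAE's DDIM image decoder.

```python
import numpy as np

LATENT_DIM = 8          # toy latent size (the real DAE latent is much larger)
FRAME_SHAPE = (4, 4)    # toy "image" resolution

rng = np.random.default_rng(0)
W_s2l = rng.standard_normal((16, LATENT_DIM))  # stand-in for the speech2latent model
W_dec = rng.standard_normal((LATENT_DIM, FRAME_SHAPE[0] * FRAME_SHAPE[1]))  # stand-in decoder

def speech2latent(audio_feats: np.ndarray) -> np.ndarray:
    """Map per-frame audio features (T, 16) to a temporally aligned latent sequence (T, LATENT_DIM)."""
    return audio_feats @ W_s2l

def decode_frame(z: np.ndarray) -> np.ndarray:
    """Stand-in for the DAE's DDIM-based image decoder: one latent vector -> one frame."""
    return (z @ W_dec).reshape(FRAME_SHAPE)

def generate_video(audio_feats: np.ndarray) -> np.ndarray:
    """End-to-end inference: predict latents from speech, then decode each frame independently."""
    latents = speech2latent(audio_feats)
    return np.stack([decode_frame(z) for z in latents])

audio = rng.standard_normal((25, 16))  # 25 frames of toy audio features
video = generate_video(audio)          # (25, 4, 4)
```

The key structural point the sketch captures is that the frames are generated one latent at a time, so no external pose or landmark extractor appears anywhere in the inference path.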
📝 Abstract
While recent research has made significant progress in speech-driven talking face generation, the quality of the generated video still lags behind that of real recordings. One reason is the use of handcrafted intermediate representations, such as facial landmarks and 3DMM coefficients, which are designed from human priors and cannot precisely describe facial movements. Moreover, these methods depend on an external pretrained model to extract the representations, and that model's performance caps the quality of talking face generation. To address these limitations, we propose a novel method called DAE-Talker that leverages data-driven latent representations obtained from a diffusion autoencoder (DAE). The DAE consists of an image encoder that maps an image to a latent vector and a DDIM-based image decoder that reconstructs the image from it. We train the DAE on talking face video frames and then use the extracted latent representations as training targets for a Conformer-based speech2latent model. During inference, DAE-Talker first predicts the latents from speech and then generates the video frames from the predicted latents with the DAE's image decoder. This allows DAE-Talker to synthesize full video frames and produce natural head movements that align with the speech content, rather than relying on a predetermined head pose from a template video. We also introduce pose modelling in speech2latent for pose controllability. In addition, we propose a novel method for generating continuous video frames with the DDIM-based image decoder trained on individual frames, eliminating the need to model the joint distribution of consecutive frames directly. Our experiments show that DAE-Talker outperforms existing popular methods in lip-sync, video fidelity, and pose naturalness. We also conduct ablation studies to analyze the effectiveness of the proposed techniques and demonstrate DAE-Talker's pose controllability.
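A property worth making explicit: DDIM sampling with η = 0 is deterministic, so a frame decoded from a fixed initial noise is a deterministic and continuous function of its conditioning latent, which is one reason decoding frames individually can still yield a coherent sequence when the predicted latents vary smoothly. Below is a toy numpy sketch of deterministic DDIM decoding; `eps_theta` is a hypothetical linear noise predictor standing in for the latent-conditioned denoising network, and the noise schedule is invented for illustration.

```python
import numpy as np

def eps_theta(x_t, t, z):
    """Hypothetical noise predictor standing in for the latent-conditioned denoiser."""
    return x_t - z

def ddim_decode(z, x_T, alpha_bars):
    """Deterministic DDIM sampling (eta = 0): the same (z, x_T) always yields the same frame."""
    x = x_T
    for t in range(len(alpha_bars) - 1, -1, -1):
        a_t = alpha_bars[t]
        a_prev = alpha_bars[t - 1] if t > 0 else 1.0
        eps = eps_theta(x, t, z)
        # Predict the clean sample, then take the deterministic DDIM step toward it.
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps
    return x

rng = np.random.default_rng(0)
alpha_bars = np.linspace(0.9999, 0.05, 50)  # toy alpha-bar schedule, decreasing in t
x_T = rng.standard_normal(16)               # one fixed noise code reused for every frame
z = rng.standard_normal(16)

frame_a = ddim_decode(z, x_T, alpha_bars)
frame_b = ddim_decode(z, x_T, alpha_bars)         # identical inputs -> identical frame
frame_c = ddim_decode(z + 1e-3, x_T, alpha_bars)  # a nearby latent -> a nearby frame
```

In this sketch all frame-to-frame variation comes from the latent sequence alone, which is the property that lets a decoder trained on individual frames be applied per frame without modelling consecutive frames jointly.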