🤖 AI Summary
Existing audio-conditioned latent diffusion models (LDMs) for lip synchronization tend to learn spurious visual-to-visual shortcuts, undermining genuine audio–visual cross-modal alignment and limiting synchronization accuracy. To address this, the authors propose StableSyncNet—a SyncNet-based discriminator architecture designed for stable convergence—and a novel Temporal Representation Alignment (TREPA) mechanism. StableSyncNet improves SyncNet accuracy from 91% to 94% on the HDTF test set, strengthening the supervision signal for lip-sync training, while TREPA explicitly enforces inter-frame temporal consistency. Both components are integrated into an audio-conditioned LDM framework. Evaluations on HDTF and VoxCeleb2 show state-of-the-art performance, with gains in lip-sync accuracy and video temporal coherence across multiple metrics.
📝 Abstract
End-to-end audio-conditioned latent diffusion models (LDMs) have been widely adopted for audio-driven portrait animation, demonstrating their effectiveness in generating lifelike, high-resolution talking videos. However, directly applying audio-conditioned LDMs to the lip-synchronization (lip-sync) task yields suboptimal lip-sync accuracy. Through an in-depth analysis, we identified the underlying cause as the "shortcut learning problem", wherein the model predominantly learns visual-visual shortcuts while neglecting the critical audio-visual correlations. To address this issue, we explored different approaches for integrating SyncNet supervision into audio-conditioned LDMs to explicitly enforce the learning of audio-visual correlations. Since the performance of SyncNet directly influences the lip-sync accuracy of the supervised model, training a well-converged SyncNet is crucial. We conducted the first comprehensive empirical studies to identify key factors affecting SyncNet convergence. Based on our analysis, we introduce StableSyncNet, with an architecture designed for stable convergence. Our StableSyncNet achieves a significant improvement in accuracy, increasing from 91% to 94% on the HDTF test set. Additionally, we introduce a novel Temporal Representation Alignment (TREPA) mechanism to enhance temporal consistency in the generated videos. Experimental results show that our method surpasses state-of-the-art lip-sync approaches across various evaluation metrics on the HDTF and VoxCeleb2 datasets.
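To make the SyncNet supervision concrete: a SyncNet-style discriminator embeds an audio window and the corresponding mouth-region frames into a shared space, and is trained so that in-sync pairs score higher than off-sync pairs (typically via a cosine-similarity BCE loss, as in Wav2Lip-style training). The sketch below is a minimal NumPy illustration of that loss on precomputed embeddings, not the paper's actual implementation; all function names and shapes are hypothetical.

```python
import numpy as np

def cosine_sim(audio_emb, visual_emb, eps=1e-8):
    """Cosine similarity per pair of (audio, visual) embedding rows."""
    num = np.sum(audio_emb * visual_emb, axis=-1)
    den = np.linalg.norm(audio_emb, axis=-1) * np.linalg.norm(visual_emb, axis=-1)
    return num / (den + eps)

def sync_bce_loss(audio_emb, visual_emb, labels, eps=1e-7):
    """BCE on cosine similarity: labels are 1 for in-sync pairs, 0 for off-sync.

    Similarity in [-1, 1] is mapped to a pseudo-probability in (0, 1).
    """
    p = (cosine_sim(audio_emb, visual_emb) + 1.0) / 2.0
    p = np.clip(p, eps, 1.0 - eps)
    return float(np.mean(-(labels * np.log(p) + (1 - labels) * np.log(1 - p))))

# Toy embeddings: aligned pairs point the same way, misaligned pairs oppose.
audio = np.array([[1.0, 0.0], [0.0, 1.0]])
in_sync_visual = np.array([[1.0, 0.0], [0.0, 1.0]])
off_sync_visual = -in_sync_visual

loss_pos = sync_bce_loss(audio, in_sync_visual, np.ones(2))   # near zero
loss_neg = sync_bce_loss(audio, off_sync_visual, np.ones(2))  # large
```

During generator training, the same loss (with labels fixed to 1) can be backpropagated through decoded frames to penalize lip motion that the frozen SyncNet judges out of sync with the audio.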