🤖 AI Summary
Existing diffusion models suffer from slow inference due to multi-step denoising and full-image attention, hindering real-time speech-driven video generation. To address this, we propose a pose-aware video distillation framework. First, we design an input-aware sparse attention mechanism guided by human pose keypoints, focusing computation on facial and hand regions to improve motion coherence. Second, we introduce a pose-conditioned distillation loss to enhance lip-sync accuracy and gesture realism. Third, we employ knowledge distillation to compress a multi-step teacher diffusion model into a computationally efficient few-step student model. Our method achieves real-time inference speed while preserving high visual fidelity. Quantitative and qualitative evaluations demonstrate superior performance over state-of-the-art audio- and input-driven methods in lip-sync precision, hand-motion naturalness, and temporal consistency.
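The summary's core efficiency idea, restricting attention to patches near pose keypoints, can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's implementation: it builds a binary keep/prune mask over a patch grid from 2D keypoint coordinates (e.g., face and hand landmarks), and the grid size, patch size, and neighborhood radius are assumed parameters.

```python
import numpy as np

def pose_sparse_mask(keypoints, grid=16, patch=16, radius=2):
    """Hypothetical pose-guided sparsity mask over a grid x grid patch layout.

    Patches within `radius` patch units (Chebyshev distance) of any pose
    keypoint are marked attendable; all others are pruned from attention.
    keypoints: iterable of (x, y) pixel coordinates in a (grid*patch)^2 image.
    """
    mask = np.zeros((grid, grid), dtype=bool)
    for x, y in keypoints:
        cx, cy = int(x) // patch, int(y) // patch
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, grid)
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, grid)
        mask[y0:y1, x0:x1] = True  # keep patches around this keypoint
    return mask

# Example: one face keypoint near the top-center and one hand keypoint
# lower-left of a 256x256 frame split into 16x16 patches of 16 px.
face, hand = (128, 40), (48, 200)
m = pose_sparse_mask([face, hand])
density = m.mean()  # fraction of patches retained for attention
```

In a real model the flattened mask would gate the attention scores (e.g., by setting pruned query-key pairs to -inf before the softmax), so compute concentrates on the face and hand regions as the summary describes.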
📝 Abstract
Diffusion models can synthesize realistic co-speech video from audio for various applications, such as video creation and virtual agents. However, existing diffusion-based methods are slow due to numerous denoising steps and costly attention mechanisms, preventing real-time deployment. In this work, we distill a many-step diffusion video model into a few-step student model. Unfortunately, directly applying recent diffusion distillation methods degrades video quality and falls short of real-time performance. To address these issues, our new video distillation method leverages input human pose conditioning for both the attention and loss functions. We first propose using the accurate spatial correspondence provided by input human pose keypoints to guide attention to relevant regions, such as the speaker's face, hands, and upper body. This input-aware sparse attention reduces redundant computation and strengthens temporal correspondence between body parts across frames, improving inference efficiency and motion coherence. To further enhance visual quality, we introduce an input-aware distillation loss that improves lip synchronization and hand motion realism. By integrating our input-aware sparse attention and distillation loss, our method achieves real-time performance with improved visual quality compared to recent audio-driven and input-driven methods. We also conduct extensive experiments demonstrating the effectiveness of our algorithmic design choices.
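The input-aware distillation loss described above can be sketched in its simplest plausible form: a teacher-student regression loss that upweights errors inside pose-derived regions (lips, hands) so the student is penalized more for mismatches there. This is a minimal illustration under assumed details; the weighting scheme (`w_pose`) and plain MSE base loss are assumptions, not the paper's exact formulation.

```python
import numpy as np

def pose_weighted_distill_loss(student, teacher, pose_mask, w_pose=4.0):
    """Hypothetical input-aware distillation loss.

    Computes a weighted MSE between student and teacher predictions,
    where pixels inside the boolean pose_mask (face/hand regions)
    receive weight w_pose and all other pixels receive weight 1.
    """
    weights = np.where(pose_mask, w_pose, 1.0)
    squared_error = (student - teacher) ** 2
    return float((weights * squared_error).sum() / weights.sum())

# Same magnitude of error costs more inside the pose region.
teacher = np.zeros((4, 4))
pose_mask = np.zeros((4, 4), dtype=bool)
pose_mask[0, 0] = True

err_inside = np.zeros((4, 4)); err_inside[0, 0] = 1.0   # error on a pose pixel
err_outside = np.zeros((4, 4)); err_outside[3, 3] = 1.0  # error on background
loss_in = pose_weighted_distill_loss(err_inside, teacher, pose_mask)
loss_out = pose_weighted_distill_loss(err_outside, teacher, pose_mask)
```

The asymmetry (`loss_in > loss_out`) is the point of the design: during few-step distillation, gradients concentrate on the regions that determine lip-sync accuracy and hand-motion realism rather than on static background.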