Input-Aware Sparse Attention for Real-Time Co-Speech Video Generation

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing diffusion models suffer from slow inference due to multi-step denoising and full-image attention, hindering real-time speech-driven video generation. To address this, we propose a pose-aware video distillation framework. First, we design an input-aware sparse attention mechanism guided by human pose keypoints, focusing computation on facial and hand regions to improve motion coherence. Second, we introduce a pose-conditioned distillation loss to enhance lip-sync accuracy and gesture realism. Third, we employ knowledge distillation to compress a multi-step teacher diffusion model into a computationally efficient few-step student model. Our method achieves real-time inference speed while preserving high visual fidelity. Quantitative and qualitative evaluations demonstrate superior performance over state-of-the-art audio- and input-driven methods in lip-sync precision, hand-motion naturalness, and temporal consistency.
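To make the sparse-attention idea concrete, here is a minimal numpy sketch of how a pose-guided attention mask could be constructed. All names and the circular-region heuristic are hypothetical illustrations, not the paper's implementation: patches within a radius of a pose keypoint (face, hands) are marked active, and attention is restricted to active-token pairs while background tokens only attend to themselves.

```python
import numpy as np

def pose_sparse_mask(grid_h, grid_w, keypoints, radius):
    """Hypothetical sketch: build a boolean keep-mask over image patches.
    A patch is 'active' if it lies within `radius` (in patch units) of any
    pose keypoint; attention is then computed only for active query-key
    pairs, skipping static background patches."""
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    active = np.zeros((grid_h, grid_w), dtype=bool)
    for (ky, kx) in keypoints:
        # mark patches inside a circle around each keypoint as active
        active |= (ys - ky) ** 2 + (xs - kx) ** 2 <= radius ** 2
    active = active.reshape(-1)                 # (N,) flattened patch tokens
    # full attention among active tokens; inactive tokens attend to themselves
    mask = np.outer(active, active)
    np.fill_diagonal(mask, True)
    return active, mask

# toy 16x16 patch grid with keypoints at a "face" and two "hands"
active, mask = pose_sparse_mask(16, 16, [(3, 8), (10, 3), (10, 13)], radius=2)
density = mask.mean()   # fraction of query-key pairs actually computed
```

On this toy grid only a few percent of query-key pairs survive the mask, which is the source of the claimed computational savings: the quadratic attention cost is paid only over pose-relevant tokens.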


📝 Abstract
Diffusion models can synthesize realistic co-speech video from audio for various applications, such as video creation and virtual agents. However, existing diffusion-based methods are slow due to numerous denoising steps and costly attention mechanisms, preventing real-time deployment. In this work, we distill a many-step diffusion video model into a few-step student model. Unfortunately, directly applying recent diffusion distillation methods degrades video quality and falls short of real-time performance. To address these issues, our new video distillation method leverages input human pose conditioning for both attention and loss functions. We first propose using accurate correspondence between input human pose keypoints to guide attention to relevant regions, such as the speaker's face, hands, and upper body. This input-aware sparse attention reduces redundant computations and strengthens temporal correspondences of body parts, improving inference efficiency and motion coherence. To further enhance visual quality, we introduce an input-aware distillation loss that improves lip synchronization and hand motion realism. By integrating our input-aware sparse attention and distillation loss, our method achieves real-time performance with improved visual quality compared to recent audio-driven and input-driven methods. We also conduct extensive experiments showing the effectiveness of our algorithmic design choices.
Problem

Research questions and friction points this paper is trying to address.

Achieving real-time co-speech video generation from audio input
Reducing computational costs of diffusion models for video synthesis
Improving visual quality and motion coherence in generated videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Input-aware sparse attention reduces redundant computations
Input-aware distillation loss improves lip sync realism
Few-step student model achieves real-time video generation
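The input-aware distillation loss can likewise be sketched as a region-weighted penalty between student and teacher outputs. This is a hedged illustration under assumed details (an L2 term and a pose-derived weight map upweighting face and hand pixels); the function and weighting scheme here are hypothetical, not the paper's exact loss.

```python
import numpy as np

def region_weighted_distill_loss(student, teacher, weight_map):
    """Hypothetical sketch of an input-aware distillation loss:
    an L2 penalty between student and teacher frames, re-weighted by a
    pose-derived map that emphasizes face and hand regions."""
    err = (student - teacher) ** 2
    return float((weight_map * err).mean())

# toy frames: student deviates from the teacher only inside a "face" region
teacher = np.zeros((32, 32))
student = np.zeros((32, 32))
student[4:8, 12:20] = 1.0            # error localized at the face
weights = np.ones((32, 32))
weights[4:8, 12:20] = 10.0           # upweight pose-relevant pixels
plain = float(((student - teacher) ** 2).mean())
weighted = region_weighted_distill_loss(student, teacher, weights)
```

Because errors on pose-relevant pixels dominate the weighted objective, the few-step student is pushed to match the teacher most closely exactly where lip-sync and hand-motion fidelity are judged.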
Beijia Lu
Carnegie Mellon University, U.S.A
Ziyi Chen
PAII Inc., U.S.A
Jing Xiao
PAII Inc., U.S.A
Jun-Yan Zhu
Assistant Professor, Carnegie Mellon University
Computer Vision · Computer Graphics · Generative Models · Computational Photography