TAVID: Text-Driven Audio-Visual Interactive Dialogue Generation

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods typically generate talking faces or conversational speech in isolation, neglecting the strong cross-modal coupling inherent in human dialogue. This work proposes the first joint audio-visual generation framework for natural two-person conversations, taking text and reference images as input to simultaneously synthesize interactive video and speech. The core innovation is a bidirectional cross-modal mapping mechanism, comprising a motion mapper and a speaker mapper, that jointly models coordinated speaker–listener dynamics. The framework integrates diffusion-based generation, cross-modal feature alignment, temporal synchronization constraints, and adversarial training. Extensive evaluation demonstrates state-of-the-art performance across four key dimensions: talking-face photorealism, listener responsiveness, inter-speaker interaction fluency, and speech quality, significantly outperforming unimodal or weakly coupled baselines.
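The summary names the two cross-modal mappers but does not specify their internals. A minimal NumPy sketch of the bidirectional exchange they describe might look as follows; the class and function names, dimensions, and residual-add conditioning are all illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Hypothetical linear projection; the paper's mapper internals are not specified.
    w = rng.standard_normal((in_dim, out_dim)) * 0.02
    return lambda x: x @ w

class CrossModalMappers:
    """Illustrative sketch: a motion mapper sends audio features to the
    visual (face) stream, and a speaker mapper sends visual features back
    to the audio (speech) stream, giving bidirectional exchange."""

    def __init__(self, audio_dim=64, visual_dim=128):
        self.motion_mapper = linear(audio_dim, visual_dim)   # audio -> visual
        self.speaker_mapper = linear(visual_dim, audio_dim)  # visual -> audio

    def exchange(self, audio_feats, visual_feats):
        # Each stream is conditioned on a projection of the other
        # (residual add, an assumption made here for simplicity).
        visual_out = visual_feats + self.motion_mapper(audio_feats)
        audio_out = audio_feats + self.speaker_mapper(visual_feats)
        return audio_out, visual_out

# Usage: per-frame features for a 25-frame dialogue clip.
T = 25
mappers = CrossModalMappers()
audio = rng.standard_normal((T, 64))
visual = rng.standard_normal((T, 128))
audio_out, visual_out = mappers.exchange(audio, visual)
```

The key point the sketch captures is that each modality's generation pipeline receives complementary information from the other at every frame, rather than being synthesized independently.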

📝 Abstract
The objective of this paper is to jointly synthesize interactive videos and conversational speech from text and reference images. With the ultimate goal of building human-like conversational systems, recent studies have explored talking or listening head generation as well as conversational speech generation. However, these works are typically studied in isolation, overlooking the multimodal nature of human conversation, which involves tightly coupled audio-visual interactions. In this paper, we introduce TAVID, a unified framework that generates both interactive faces and conversational speech in a synchronized manner. TAVID integrates face and speech generation pipelines through two cross-modal mappers (i.e., a motion mapper and a speaker mapper), which enable bidirectional exchange of complementary information between the audio and visual modalities. We evaluate our system across four dimensions: talking face realism, listening head responsiveness, dyadic interaction fluency, and speech quality. Extensive experiments demonstrate the effectiveness of our approach across all these aspects.
Problem

Research questions and friction points this paper is trying to address.

Talking/listening head generation and conversational speech generation are typically studied in isolation
Existing methods overlook the tightly coupled audio-visual interactions in human conversation
Building human-like dialogue systems requires synchronized generation of interactive video and speech from text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for synchronized audio-visual dialogue generation from text and reference images
Two cross-modal mappers (a motion mapper and a speaker mapper) enable bidirectional exchange between the audio and visual modalities
Integrates face and speech generation pipelines to jointly synthesize interactive faces and conversational speech