🤖 AI Summary
Existing methods typically generate talking faces or conversational speech in isolation, neglecting the strong cross-modal coupling inherent in human dialogue. This work proposes the first joint audio-visual generation framework for natural two-person conversations: given text and reference images, it simultaneously synthesizes interactive video and speech. The core innovation is a bidirectional cross-modal mapping mechanism, comprising a motion mapper and a speaker mapper, that jointly models coordinated speaker–listener dynamics. The framework integrates diffusion-based generation, cross-modal feature alignment, temporal synchronization constraints, and adversarial training. Extensive evaluation demonstrates state-of-the-art performance across four key dimensions: talking-face photorealism, listener responsiveness, inter-speaker interaction fluency, and speech quality, significantly outperforming unimodal and weakly coupled baselines.
📝 Abstract
The objective of this paper is to jointly synthesize interactive videos and conversational speech from text and reference images. With the ultimate goal of building human-like conversational systems, recent studies have explored talking- or listening-head generation as well as conversational speech generation. However, these tasks have typically been studied in isolation, overlooking the multimodal nature of human conversation, which involves tightly coupled audio-visual interactions. In this paper, we introduce TAVID, a unified framework that generates both interactive faces and conversational speech in a synchronized manner. TAVID integrates the face and speech generation pipelines through two cross-modal mappers (a motion mapper and a speaker mapper), which enable bidirectional exchange of complementary information between the audio and visual modalities. We evaluate our system across four dimensions: talking-face realism, listening-head responsiveness, dyadic interaction fluency, and speech quality. Extensive experiments demonstrate the effectiveness of our approach across all these aspects.
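The bidirectional exchange described above can be illustrated with a minimal sketch. The mapper names follow the abstract, but everything else here is an assumption for illustration: the real mappers are learned networks conditioning diffusion generators, whereas this toy version uses fixed linear maps and additive fusion purely to show the direction of information flow between the two branches.

```python
# Hypothetical sketch of TAVID-style bidirectional cross-modal exchange.
# The mapper names (motion_mapper, speaker_mapper) come from the abstract;
# the linear weights and additive fusion are illustrative assumptions only.

def linear_map(x, weight, bias):
    """Toy linear layer: y[i] = sum_j weight[i][j] * x[j] + bias[i]."""
    return [sum(w * xj for w, xj in zip(row, x)) + b
            for row, b in zip(weight, bias)]

def motion_mapper(audio_feat):
    # Audio features -> facial-motion cues for the visual branch
    # (3-dim audio, 2-dim motion; dimensions are arbitrary here).
    W = [[0.5, 0.1, 0.0],
         [0.0, 0.2, 0.4]]
    return linear_map(audio_feat, W, [0.0, 0.0])

def speaker_mapper(visual_feat):
    # Visual features -> speaker/voice cues for the audio branch.
    W = [[0.3, 0.0],
         [0.1, 0.2],
         [0.0, 0.5]]
    return linear_map(visual_feat, W, [0.0, 0.0, 0.0])

def exchange_step(audio_feat, visual_feat):
    """One synchronized step: each modality is conditioned on the other."""
    motion_cue = motion_mapper(audio_feat)      # guides face generation
    speaker_cue = speaker_mapper(visual_feat)   # guides speech generation
    # In the actual framework these cues would condition diffusion
    # denoisers; additive fusion stands in for that conditioning here.
    new_visual = [v + m for v, m in zip(visual_feat, motion_cue)]
    new_audio = [a + s for a, s in zip(audio_feat, speaker_cue)]
    return new_audio, new_visual

audio, visual = [1.0, 0.0, 1.0], [0.5, 0.5]
audio, visual = exchange_step(audio, visual)
print(audio, visual)  # each branch now carries cues from the other
```

The point of the sketch is only the coupling pattern: the audio branch cannot be updated without visual information and vice versa, which is the property the paper argues isolated talking-face or speech pipelines lack.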