🤖 AI Summary
Existing 3D conversational avatar generation models only model unidirectional speaking or listening behaviors, failing to capture the natural, dynamic role-switching inherent in face-to-face dialogue—resulting in rigid motions and discontinuous transitions. This work formally defines and addresses the novel task of multi-turn, two-party 3D speaker interaction generation. We propose a dual-agent co-generation paradigm: (1) curating the first 50-hour multi-turn, dual-role 3D dialogue dataset; (2) designing a spatiotemporally disentangled diffusion framework integrating role-aware motion encoders and cross-agent consistency constraints; and (3) jointly modeling lip synchronization (during speaking) and nonverbal feedback (during listening) under unified audio–pose–expression conditioning. Experiments demonstrate significant improvements over SOTA in naturalness, role consistency, and interaction fluency. Our method supports coherent long-horizon generation (>10 turns), and user studies report a 92% realism acceptance rate.
📝 Abstract
In face-to-face conversations, individuals need to switch between speaking and listening roles seamlessly. Existing 3D talking head generation models focus solely on speaking or listening, neglecting the natural dynamics of interactive conversation, which leads to unnatural interactions and awkward transitions. To address this issue, we propose a new task -- multi-round dual-speaker interaction for 3D talking head generation -- which requires models to handle and generate both speaking and listening behaviors in continuous conversation. To solve this task, we introduce DualTalk, a novel unified framework that integrates the dynamic behaviors of speakers and listeners to simulate realistic and coherent dialogue interactions. This framework not only synthesizes lifelike talking heads when speaking but also generates continuous and vivid non-verbal feedback when listening, effectively capturing the interplay between the roles. We also create a new dataset featuring 50 hours of multi-round conversations with over 1,000 characters, where participants continuously switch between speaking and listening roles. Extensive experiments demonstrate that our method significantly enhances the naturalness and expressiveness of 3D talking heads in dual-speaker conversations. We recommend watching the supplementary video: https://ziqiaopeng.github.io/dualtalk.