🤖 AI Summary
Existing multimodal dialogue generation approaches struggle to achieve fine-grained alignment and controllable expression across speech, vision, and text. This work proposes a conditionally controllable multimodal dialogue generation framework grounded in the natural interaction patterns of human communication. By constructing a high-quality annotation pipeline leveraging cinematic data, we introduce MM-Dia—the first multimodal dialogue dataset supporting style-controllable spoken dialogue synthesis—and establish MM-Dia-Bench, a benchmark for evaluating cross-modal style consistency. Experimental results demonstrate that the proposed method significantly enhances fine-grained controllability in generated dialogues. Furthermore, evaluations using MM-Dia-Bench reveal substantial gaps in current models’ ability to replicate the expressive richness characteristic of human multimodal interaction.
📝 Abstract
The recent advancement of Artificial Intelligence Generated Content (AIGC) has led to significant strides in modeling human interaction, particularly in the context of multimodal dialogue. While current methods generate impressively realistic dialogue in isolated modalities such as speech or vision, challenges remain in controllable Multimodal Dialogue Generation (MDG). This paper focuses on the natural alignment between speech, vision, and text in human interaction, aiming for expressive dialogue generation through multimodal conditional control. To address the insufficient richness and diversity of dialogue expressiveness in existing datasets, we introduce a novel multimodal dialogue annotation pipeline that curates dialogues from movies and TV series with fine-grained annotations of interactional characteristics. The resulting MM-Dia dataset (360+ hours, 54,700 dialogues) facilitates explicitly controlled MDG, specifically through style-controllable dialogue speech synthesis. In parallel, MM-Dia-Bench (309 highly expressive dialogues with visible single-/dual-speaker scenes) serves as a rigorous testbed for implicit cross-modal MDG control, evaluating audio-visual style consistency across modalities. Extensive experiments demonstrate that training on MM-Dia significantly enhances fine-grained controllability, while evaluations on MM-Dia-Bench reveal the limitations of current frameworks in replicating the nuanced expressiveness of human interaction. These findings provide new insights and pose new challenges for multimodal conditional dialogue generation.