Multimodal Conversation Structure Understanding

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) lack fine-grained structural understanding, such as speaker/addressee/side-participant role attribution and utterance thread modeling, in multimodal multiparty dialogues. Method: We introduce a benchmark for this task, comprising human annotations over audiovisual dialogue clips: 4,398 speaker and reply-to annotations, 5,755 addressees, and 3,142 side-participants. We systematically define and annotate fine-grained conversational roles and thread structures, and design an evaluation protocol grounded in conversation analysis and sociolinguistics to compare audio-visual LLMs (AV-LLMs) against vision-language models (VLMs). Contribution/Results: The most performant AV-LLM outperforms all VLMs in role identification, yet anonymizing participants causes significant performance degradation. Participant count is the strongest negative predictor of role-attribution accuracy. This work contributes a new benchmark, a theoretically grounded task framework, and actionable insights for improving robustness in multimodal dialogue structure modeling.

📝 Abstract
Conversations are usually structured by roles -- who is speaking, who is being addressed, and who is listening -- and unfold in threads that break with changes in speaker floor or topical focus. While large language models (LLMs) have shown impressive capabilities in dialogue and reasoning, their ability to understand fine-grained conversational structure, especially in multi-modal, multi-party settings, remains underexplored. To address this gap, we introduce a suite of tasks focused on conversational role attribution (speaker, addressees, side-participants) and conversation threading (utterance linking and clustering), drawing on conversation analysis and sociolinguistics. To support these tasks, we present a human-annotated dataset of 4,398 annotations for speakers and reply-to relationships, 5,755 addressees, and 3,142 side-participants. We evaluate popular audio-visual LLMs and vision-language models on our dataset, and our experimental results suggest that multimodal conversational structure understanding remains challenging. The most performant audio-visual LLM outperforms all vision-language models across all metrics, especially in speaker and addressee recognition. However, its performance drops significantly when conversation participants are anonymized. The number of conversation participants in a clip is the strongest negative predictor of role-attribution performance, while acoustic clarity (measured by pitch and spectral centroid) and detected face coverage yield positive associations. We hope this work lays the groundwork for future evaluation and development of multimodal LLMs that can reason more effectively about conversation structure.
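
The tasks described above suggest a per-utterance record that carries the three conversational roles plus a reply-to link for threading. Below is a minimal sketch of such a record; the `Utterance` class and its field names are illustrative assumptions, not the paper's released data format.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Utterance:
    # Illustrative schema for one annotated utterance in a clip;
    # field names are assumptions, not the paper's released format.
    uid: str                                                     # utterance id within the clip
    speaker: str                                                 # who holds the floor
    addressees: list[str] = field(default_factory=list)          # who is being addressed
    side_participants: list[str] = field(default_factory=list)   # ratified listeners
    reply_to: Optional[str] = None                               # uid this utterance responds to
```

Under this view, role attribution fills the three role fields per utterance, while threading recovers the `reply_to` links and the thread clusters they induce.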
Problem

Research questions and friction points this paper is trying to address.

Understanding fine-grained conversational structure in multimodal settings
Improving role attribution and threading in multi-party conversations
Evaluating multimodal LLMs on speaker and addressee recognition tasks (see the scoring sketch below)
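
As referenced in the last point, a scoring sketch: assuming gold and predicted labels keyed by utterance id, speaker recognition can be scored as accuracy and addressee recognition as micro-F1 over sets, since an utterance may address several people. The function names are hypothetical and the paper's exact protocol may differ.

```python
def speaker_accuracy(gold: dict, pred: dict) -> float:
    # gold/pred map utterance id -> speaker name (assumed format).
    return sum(pred.get(uid) == spk for uid, spk in gold.items()) / len(gold)

def addressee_f1(gold: dict, pred: dict) -> float:
    # gold/pred map utterance id -> collection of addressee names (assumed format).
    tp = fp = fn = 0
    for uid, gold_addr in gold.items():
        g, p = set(gold_addr), set(pred.get(uid, []))
        tp += len(g & p)   # correctly predicted addressees
        fp += len(p - g)   # spurious predictions
        fn += len(g - p)   # missed addressees
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```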
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal LLMs for conversational structure analysis
Human-annotated dataset for role attribution
Evaluation of audio-visual models on threading tasks (see the threading sketch below)
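
For the threading tasks, one common way to turn predicted reply-to links into thread clusters is to take connected components of the link graph. The sketch below does this with a small union-find; `threads_from_links` is a hypothetical helper, not necessarily the paper's clustering procedure.

```python
from collections import defaultdict

def threads_from_links(reply_to: dict) -> list:
    # reply_to maps utterance id -> parent utterance id, or None for
    # thread-initial utterances (assumed input format).
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for uid, target in reply_to.items():
        find(uid)                          # register every utterance
        if target is not None:
            parent[find(uid)] = find(target)  # link into parent's component

    clusters = defaultdict(list)
    for uid in parent:
        clusters[find(uid)].append(uid)
    return list(clusters.values())
```

For example, `threads_from_links({"u1": None, "u2": "u1", "u3": None})` yields two threads, `[["u1", "u2"], ["u3"]]`.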
Authors

Kent K. Chang, University of California, Berkeley
Mackenzie Hanh Cramer, University of California, Berkeley
Anna Ho, University of California, Berkeley
Ti Ti Nguyen, University of California, Berkeley
Yilin Yuan, University of California, Berkeley
David Bamman, University of California, Berkeley
Natural Language Processing · Machine Learning · Digital Humanities · Computational Social Science