Predicting Turn-Taking and Backchannel in Human-Machine Conversations Using Linguistic, Acoustic, and Visual Signals

📅 2025-05-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the scarcity of high-quality multimodal annotated data for turn-taking and backchannel prediction in human–machine dialogue, this paper introduces MM-F2F, the first large-scale, fine-grained multimodal face-to-face dialogue dataset (210 hours of video, 1.5M words, ~20M frames), featuring frame-level annotations for both turn-taking and backchannel behaviors. The authors propose an end-to-end multimodal fusion framework comprising: (i) an automated multimodal data acquisition pipeline; (ii) a fusion mechanism that models cross-modal interrelations while accepting any subset of text, audio, and video inputs; and (iii) integrated self-supervised pretraining, cross-modal alignment, and temporal probabilistic prediction. Experiments demonstrate state-of-the-art performance, with F1-score improvements of 10% on turn-taking prediction and 33% on backchannel prediction. The MM-F2F dataset and source code are publicly released.

📝 Abstract
This paper addresses the gap in predicting turn-taking and backchannel actions in human-machine conversations using multi-modal signals (linguistic, acoustic, and visual). To overcome the limitations of existing datasets, we propose an automatic data collection pipeline that allows us to collect and annotate over 210 hours of human conversation videos. From this, we construct a Multi-Modal Face-to-Face (MM-F2F) human conversation dataset, including over 1.5M words and corresponding turn-taking and backchannel annotations from approximately 20M frames. Additionally, we present an end-to-end framework that predicts the probability of turn-taking and backchannel actions from multi-modal signals. The proposed model emphasizes the interrelation between modalities and supports any combination of text, audio, and video inputs, making it adaptable to a variety of realistic scenarios. Our experiments show that our approach achieves state-of-the-art performance on turn-taking and backchannel prediction tasks, achieving a 10% increase in F1-score on turn-taking and a 33% increase on backchannel prediction. Our dataset and code are publicly available online to ease subsequent research.
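
The abstract does not give implementation details, but the described design (any combination of text, audio, and video inputs; per-frame probabilities for turn-taking and backchannel) can be illustrated with a minimal sketch. This is not the authors' released code; the class name, feature dimensions, learned missing-modality placeholders, and mean-pooled transformer fusion are all assumptions made for illustration.

```python
# Minimal sketch of a fusion model that accepts any subset of text, audio,
# and video features and emits per-frame turn-taking / backchannel
# probabilities. All names and dimensions are illustrative assumptions,
# not the MM-F2F authors' architecture.
import torch
import torch.nn as nn


class MultimodalTurnTakingModel(nn.Module):
    def __init__(self, d_text=768, d_audio=512, d_video=512, d_model=256):
        super().__init__()
        # Per-modality projections into a shared embedding space.
        self.proj = nn.ModuleDict({
            "text": nn.Linear(d_text, d_model),
            "audio": nn.Linear(d_audio, d_model),
            "video": nn.Linear(d_video, d_model),
        })
        # Learned placeholder used when a modality is absent, so the
        # fusion layer always sees the same input layout.
        self.missing = nn.ParameterDict({
            name: nn.Parameter(torch.zeros(d_model)) for name in self.proj
        })
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Two binary heads: turn-taking and backchannel probability per frame.
        self.head = nn.Linear(d_model, 2)

    def forward(self, feats: dict[str, torch.Tensor]) -> torch.Tensor:
        """feats maps any non-empty subset of {"text", "audio", "video"}
        to (batch, frames, d_modality) tensors aligned to a common
        frame rate."""
        batch, frames = next(iter(feats.values())).shape[:2]
        streams = []
        for name, proj in self.proj.items():
            if name in feats:
                streams.append(proj(feats[name]))
            else:
                # Substitute the learned "missing modality" embedding.
                streams.append(self.missing[name].expand(batch, frames, -1))
        # Average the modality streams, then model temporal context.
        fused = self.fusion(torch.stack(streams, dim=2).mean(dim=2))
        return torch.sigmoid(self.head(fused))  # (batch, frames, 2)


if __name__ == "__main__":
    model = MultimodalTurnTakingModel()
    # Audio-only input: the other two modalities fall back to their
    # learned placeholders, so inference still works.
    probs = model({"audio": torch.randn(2, 100, 512)})
    print(probs.shape)  # torch.Size([2, 100, 2])
```

The learned placeholder embedding is one common way to let a single model serve any modality subset at inference time; the paper may handle missing modalities differently.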
Problem

Research questions and friction points this paper is trying to address.

Predicting turn-taking and backchannel in human-machine conversations
Overcoming dataset limitations with multi-modal signal collection
Developing an adaptable end-to-end framework for conversation analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic data collection pipeline for conversation videos
Multi-Modal Face-to-Face dataset with diverse annotations
End-to-end framework for multi-modal turn-taking prediction
👥 Authors
Yuxin Lin — School of Informatics, Xiamen University
Yinglin Zheng — Xiamen University
Ming Zeng — School of Informatics, Xiamen University
Wangzheng Shi — School of Informatics, Xiamen University