Development and Validation of Engagement and Rapport Scales for Evaluating User Experience in Multimodal Dialogue Systems

📅 2025-05-20
📈 Citations: 3
✹ Influential: 0
đŸ€– AI Summary
Existing multimodal foreign language learning dialogue systems lack validated instruments for assessing user experience. Method: This study develops and validates a dual-dimensional scale measuring user engagement and rapport, integrating theories from educational psychology, social psychology, and second language acquisition to uniquely distinguish human tutor versus AI agent experiences. Rigorous psychometric evaluation—including Cronbach’s α analysis, confirmatory factor analysis (CFA), and human–AI comparative experiments—was conducted. Contribution/Results: The scale demonstrates excellent reliability (α > 0.90) and construct validity (CFI > 0.95). Empirical findings reveal systematic differences between human and AI interactions across subdimensions—including task focus, affective responsiveness, and trust formation—highlighting critical design implications. The validated instrument provides a reusable theoretical framework and measurement benchmark for iterative UX optimization and evaluation of multimodal educational dialogue systems.

📝 Abstract
This study aimed to develop and validate two scales of engagement and rapport to evaluate the quality of user experience with multimodal dialogue systems in the context of foreign language learning. The scales were designed based on theories of engagement in educational psychology, social psychology, and second language acquisition. Seventy-four Japanese learners of English completed roleplay and discussion tasks with trained human tutors and a dialogue agent. After completing each dialogic task, they responded to the scales of engagement and rapport. The validity and reliability of the scales were investigated through two analyses. We first conducted an analysis of Cronbach's alpha coefficients and a series of confirmatory factor analyses to test the structural validity of the scales and the reliability of our designed items. We then compared the scores of engagement and rapport between the dialogues with human tutors and those with the dialogue agent. The results revealed that our scales succeeded in capturing differences in dialogue experience quality between the human interlocutors and the dialogue agent from multiple perspectives.
Problem

Research questions and friction points this paper is trying to address.

Develop scales to assess engagement and rapport in multimodal dialogue systems
Validate scales for evaluating user experience in language learning contexts
Compare dialogue quality between human tutors and dialogue agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed engagement and rapport evaluation scales
Validated scales with Cronbach's alpha analysis
Compared human and agent dialogue experiences
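The paper does not publish analysis code, but the reliability step it reports (Cronbach's α > 0.90) follows the standard formula α = (k/(k−1))·(1 − Σ item variances / total-score variance). As an illustrative sketch only, with a made-up toy score matrix (not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of scale items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents rating 4 internally consistent Likert items
scores = np.array([
    [4, 4, 5, 4],
    [3, 3, 3, 3],
    [5, 5, 5, 4],
    [2, 2, 2, 3],
    [4, 3, 4, 4],
])
print(round(cronbach_alpha(scores), 2))  # → 0.95
```

An α above 0.90, as the paper reports for its scales, indicates that the items covary strongly enough to be treated as measuring one underlying construct.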
Fuma Kurata — Waseda University, Tokyo, Japan
Mao Saeki
Masaki Eguchi
Shungo Suzuki
Hiroaki Takatsu
Yoichi Matsuyama — Associate Research Professor, Waseda University
Human-Robot Interaction · Dialogue Systems · Cognitive Science