Signaling Human Intentions to Service Robots: Understanding the Use of Social Cues during In-Person Conversations

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how service robots can unobtrusively interpret multimodal social cues—speech, gesture, and gaze—while users remain focused on their primary tasks, enabling socially acceptable human–robot interaction. Focusing on a café small-talk scenario, we employed an augmented reality (AR) simulation experiment, multimodal behavioral coding, user interviews, and statistical modeling to examine how robot embodiment (anthropomorphic, zoomorphic, or technical; ground- or aerial-based) and the user's conversational role (initiator vs. responder) jointly shape cue usage patterns. We find that embodiment significantly influences the spatial distribution of cues (e.g., gesture height), whereas conversational role governs the combinatorial complexity of cue integration. We propose an interaction design framework integrating cognitive load, social context, and perceptual feasibility. Furthermore, we derive three rational principles guiding users' cue selection and identify cue patterns that generalize across intents.

📝 Abstract
As social service robots become commonplace, it is essential for them to effectively interpret human signals, such as speech, gestures, and eye gaze, when people need to focus on their primary tasks, so as to minimize interruptions and distractions. Toward such socially acceptable Human-Robot Interaction, we conducted a study (N = 24) in an AR-simulated coffee-chat context. Participants elicited social cues to signal intentions to an anthropomorphic, zoomorphic, grounded technical, or aerial technical robot waiter while acting as speakers or listeners. Our findings reveal common patterns of social cues across intentions, the effects of robot morphology on social cue position and of conversational role on social cue complexity, and users' rationale for choosing social cues. We offer insights into understanding social cues in relation to perceptions of robots, cognitive load, and social context. Additionally, we discuss design considerations on approaching, social cue recognition, and response strategies for future service robots.
Problem

Research questions and friction points this paper is trying to address.

Understanding human social cues for robot interaction
Effects of robot design on social cue interpretation
Improving service robots' response to human intentions
Innovation

Methods, ideas, or system contributions that make the work stand out.

AR-simulated context for human-robot interaction study
Analyzed social cues across diverse robot morphologies
Explored user rationale for social cue selection
👥 Authors

Hanfang Lyu — Hong Kong University of Science and Technology

Xiaoyu Wang — The Hong Kong University of Science and Technology, Hong Kong, China

Nandi Zhang — Department of Computer Science, University of Calgary
Interests: Human-Computer Interaction · Mixed Reality · Human-Robot Interaction

Shuai Ma — The Hong Kong University of Science and Technology, Hong Kong, China

Qian Zhu — Renmin University of China, Beijing, China

Yuhan Luo — Assistant Professor, City University of Hong Kong
Interests: Human-Computer Interaction · Health Informatics · Ubiquitous Computing · Personal Informatics

Fugee Tsung — The Hong Kong University of Science and Technology, Hong Kong, China; The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China

Xiaojuan Ma — Hong Kong University of Science and Technology
Interests: Human-Computer Interaction · Human-Engaged Computing · Affective Computing