🤖 AI Summary
This study investigates how service robots can unobtrusively interpret multimodal social cues—speech, gesture, and gaze—while users remain focused on their primary tasks, enabling socially acceptable human–robot interaction. Focusing on a café small-talk scenario, we employed an augmented reality (AR) simulation experiment, multimodal behavioral coding, user interviews, and statistical modeling to systematically examine how robot embodiment (anthropomorphic, zoomorphic, grounded technical, or aerial technical) and user conversational role (initiator vs. responder) jointly modulate cue usage patterns. We find that embodiment significantly influences the spatial distribution of cues (e.g., gesture height), whereas conversational role governs the combinatorial complexity of cue integration. We propose an interaction design framework integrating cognitive load, social context, and perceptual feasibility. Furthermore, we derive three rational principles guiding users' cue selection and identify cue patterns that generalize across intents.
📝 Abstract
As social service robots become commonplace, it is essential for them to effectively interpret human signals, such as speech, gesture, and eye gaze, while people focus on their primary tasks, so as to minimize interruptions and distractions. Toward such socially acceptable Human-Robot Interaction, we conducted a study ($N=24$) in an AR-simulated coffee-chat context. Participants produced social cues to signal intentions to an anthropomorphic, zoomorphic, grounded technical, or aerial technical robot waiter while acting as either speakers or listeners. Our findings reveal common patterns of social cues across intentions, the effects of robot morphology on social cue position and of conversational role on social cue complexity, and users' rationales for choosing social cues. We offer insights into understanding social cues with respect to perceptions of robots, cognitive load, and social context. Additionally, we discuss design considerations on approaching behavior, social cue recognition, and response strategies for future service robots.