🤖 AI Summary
This study addresses the limitations of traditional navigation systems in large indoor environments, which often monopolize users’ hands and visual attention, hindering natural interaction. To overcome this, the authors propose a cross-entity hand-off mechanism between a social robot and a wearable device that maintains perceptual consistency for users through a shared voice and state, enabling seamless multi-device collaborative navigation. Evaluated within a conversational agent framework and validated through human-subject experiments, the approach did not significantly improve task performance; however, participants expressed a clear preference for the wearable interaction modality and found the conversational hand-off engaging. This work offers novel insights and empirical support for designing continuous, multimodal interactions with embodied AI systems.
📝 Abstract
Navigating large and complex indoor environments, such as universities, airports, and hospitals, is cognitively demanding, requiring sustained attention and effort. While mobile applications provide convenient navigation support, they occupy the user's hands and visual attention, limiting natural interaction. In this paper, we explore conversation hand-off as a method for multi-device indoor navigation, in which a Conversational Agent (CA) transitions seamlessly from a stationary social robot to a wearable device. We evaluated robot-only, wearable-only, and robot-to-wearable hand-off conditions in a university campus setting using a within-subjects design with N=24 participants. We find that conversation hand-off is experienced as engaging, even though no performance benefits were observed, and that most participants preferred the wearable-only system. Our findings suggest that such re-embodied assistants should maintain a shared voice and state across embodiments. We demonstrate how conversational hand-offs can bridge cognitive and physical transitions, enriching human interaction with embodied AI.