🤖 AI Summary
Existing gesture-based interaction in multi-robot systems suffers from rigid fixed-mapping schemes, poor recognition robustness, and limited expressive dimensionality. To address these issues, this paper proposes a role-playing metaphor-based gesture interaction framework that introduces three narrative roles—“Director,” “Puppeteer,” and “Wizard”—whose behavioral logic is tightly coupled with gesture semantics, moving beyond conventional point-to-point control paradigms. The framework integrates dynamic gesture recognition, role-aware task allocation modeling, and hierarchical narrative structure design to enable multimodal, collaborative storytelling across multiple robots. Experimental evaluation shows significant improvements: a 37% increase in user operational creativity and a 29% reduction in cognitive workload (measured via NASA-TLX). The framework broadens the expressive capacity and contextual adaptability of multi-robot interaction in open-ended domains such as education and live performance.
📝 Abstract
Gestures are an expressive input modality for controlling multiple robots, but their use is often limited by rigid mappings and recognition constraints. To move beyond these limitations, we propose role-playing metaphors as a scaffold for designing richer interactions. By introducing three roles (Director, Puppeteer, and Wizard), we demonstrate how narrative framing can guide the creation of diverse gesture sets and interaction styles. These roles enable a variety of scenarios, showing how role-play can unlock new possibilities for multi-robot systems. Our approach emphasizes creativity, expressiveness, and intuitiveness as key elements of future human-robot interaction design.