π€ AI Summary
This work addresses the problem of inaccurate and unnatural gesturing by embodied agents (e.g., robots) during natural deictic communication with humans in physical environments. We propose a gesture generation framework that integrates imitation learning (IL) with hierarchical reinforcement learning (HRL), jointly modeling motion control policies and referential semantics using only small-scale motion-capture dataβthus ensuring both physical plausibility and deictic precision. In human-subject evaluations within a virtual reality deictic task, our method significantly outperforms purely supervised baselines, improving referential accuracy by +12.3% and achieving statistically significant gains in perceived naturalness (p < 0.01). It has been successfully deployed on a real robotic platform, demonstrating practical viability. The core contribution is the first integration of hierarchical RL into an IL pipeline, enabling high-fidelity deictic gesture synthesis under low-data regimes.
π Abstract
One of the main goals of robotics and intelligent agent research is to enable natural communication with humans in physically situated settings. While recent work has focused on verbal modes such as language and speech, non-verbal communication is crucial for flexible interaction. We present a framework for generating pointing gestures in embodied agents by combining imitation and reinforcement learning. Using a small motion capture dataset, our method learns a motor control policy that produces physically valid, naturalistic gestures with high referential accuracy. We evaluate the approach against supervised learning and retrieval baselines in both objective metrics and a virtual reality referential game with human users. Results show that our system achieves higher naturalness and accuracy than state-of-the-art supervised models, highlighting the promise of imitation-RL for communicative gesture generation and its potential application to robots.