🤖 AI Summary
Existing robot imitation learning methods that learn from human RGB videos suffer from kinematic discrepancies and interference from redundant motions, hindering end-to-end autonomous learning. This paper proposes the first end-to-end diffusion model framework for cross-morphological human-to-robot imitation: it directly maps a single RGB frame to a robot joint control sequence, without requiring predefined motion mappings or explicit pose estimation. We introduce the first human-robot mutual imitation video dataset, incorporate a joint-configuration constraint mechanism to suppress spurious degrees of freedom, and unify perception and action within a single model. On the RGB-to-joint-value generation task, our method significantly outperforms state-of-the-art approaches, achieving a 23.6% improvement in motion fidelity and a 31.4% reduction in cross-subject generalization error.
📝 Abstract
There has been substantial progress in humanoid robots, with new skills continuously being taught, ranging from navigation to manipulation. While these abilities may seem impressive, the teaching methods often remain inefficient. To enhance the process of teaching robots, we propose leveraging a mechanism used effectively by humans: teaching by demonstration. In this paper, we introduce DIRIGENt (DIrect Robotic Imitation GENeration model), a novel end-to-end diffusion approach that directly generates joint values from observations of human demonstrations, enabling a robot to imitate these actions without any predefined mapping between the robot and humans. We create a dataset in which humans imitate a robot and then use this collected data to train a diffusion model that enables a robot to imitate humans. Three aspects form the core of our contribution. First is our novel dataset with natural pairs of human and robot poses, allowing our approach to imitate humans accurately despite the gap between their anatomies. Second, the diffusion input to our model alleviates the challenge of redundant joint configurations, limiting the search space. Finally, our end-to-end architecture from perception to action leads to improved learning capability. Through our experimental analysis, we show that combining these three aspects allows DIRIGENt to outperform existing state-of-the-art approaches in the task of generating joint values from RGB images.
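To make the described pipeline concrete, the following is a minimal, hypothetical sketch of a conditional diffusion model that denoises a robot joint-value vector conditioned on an RGB frame, in the spirit of the end-to-end perception-to-action design. All module sizes, the toy noise schedule, and the joint count are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class JointDiffusionSketch(nn.Module):
    """Hypothetical sketch: predict the noise added to a joint-value
    vector, conditioned on an RGB frame (DDPM-style epsilon prediction)."""

    def __init__(self, n_joints=7, feat_dim=64):
        super().__init__()
        # Tiny CNN encoder standing in for the perception backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Denoiser consumes noisy joints + image features + timestep.
        self.denoiser = nn.Sequential(
            nn.Linear(n_joints + feat_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, n_joints),
        )

    def forward(self, noisy_joints, t, image):
        cond = self.encoder(image)
        x = torch.cat([noisy_joints, cond, t[:, None].float()], dim=-1)
        return self.denoiser(x)

# One illustrative training step on random stand-in data.
model = JointDiffusionSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(4, 7)               # ground-truth joint values (toy data)
t = torch.randint(0, 100, (4,))        # diffusion timesteps
noise = torch.randn_like(clean)
alpha = (1.0 - t.float() / 100.0)[:, None]  # toy linear noise schedule
noisy = alpha.sqrt() * clean + (1 - alpha).sqrt() * noise
pred = model(noisy, t, torch.rand(4, 3, 64, 64))
loss = nn.functional.mse_loss(pred, noise)  # standard epsilon-prediction loss
loss.backward()
opt.step()
```

Feeding the noisy joint vector itself into the denoiser, rather than regressing joints directly from pixels, is what lets the diffusion input constrain redundant joint configurations, as the abstract describes.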