DIRIGENt: End-To-End Robotic Imitation of Human Demonstrations Based on a Diffusion Model

📅 2025-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing robot imitation learning methods from human RGB videos suffer from kinematic discrepancies and interference from redundant motions, hindering end-to-end autonomous learning. This paper proposes the first end-to-end diffusion model framework for cross-morphological human-to-robot imitation: it directly maps a single RGB frame to a robot joint control sequence, without requiring predefined motion mappings or explicit pose estimation. We introduce the first human-robot mutual imitation video dataset, incorporate a joint configuration constraint mechanism to suppress spurious degrees of freedom, and enable joint perception-action modeling. In the RGB-to-joint-value generation task, our method significantly outperforms state-of-the-art approaches—achieving a 23.6% improvement in motion fidelity and a 31.4% reduction in cross-subject generalization error.

📝 Abstract
There has been substantial progress in humanoid robots, with new skills continuously being taught, ranging from navigation to manipulation. While these abilities may seem impressive, the teaching methods often remain inefficient. To enhance the process of teaching robots, we propose leveraging a mechanism effectively used by humans: teaching by demonstrating. In this paper, we introduce DIRIGENt (DIrect Robotic Imitation GENeration model), a novel end-to-end diffusion approach that directly generates joint values from observing human demonstrations, enabling a robot to imitate these actions without any existing mapping between it and humans. We create a dataset in which humans imitate a robot and then use this collected data to train a diffusion model that enables a robot to imitate humans. The following three aspects are the core of our contribution. First is our novel dataset with natural pairs between human and robot poses, allowing our approach to imitate humans accurately despite the gap between their anatomies. Second, the diffusion input to our model alleviates the challenge of redundant joint configurations, limiting the search space. And finally, our end-to-end architecture from perception to action leads to an improved learning capability. Through our experimental analysis, we show that combining these three aspects allows DIRIGENt to outperform existing state-of-the-art approaches in the field of generating joint values from RGB images.
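To make the described pipeline concrete, here is a minimal sketch of reverse-diffusion sampling of a joint-value vector conditioned on an RGB frame, in the spirit of the paper's RGB-to-joint-value generation. This is an illustrative assumption, not the authors' implementation: the encoder, the zero-output placeholder denoiser, the 6-joint dimensionality, and the linear noise schedule are all stand-ins (a trained model would use a neural noise-prediction network).

```python
import numpy as np

NUM_JOINTS = 6   # hypothetical joint count; the actual robot's DoF may differ
T = 50           # number of diffusion steps (assumed)

# Linear noise schedule (assumption; the paper's schedule is not given here).
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def encode_image(rgb):
    """Stand-in image encoder: pools the RGB frame into a small feature vector.
    A real system would use a learned perception backbone."""
    return rgb.mean(axis=(0, 1))  # shape (3,)

def denoiser(x_t, t, cond):
    """Placeholder for the noise-prediction network eps_theta(x_t, t, cond).
    Returns zeros so the sampler below is runnable end to end."""
    return np.zeros_like(x_t)

def sample_joints(rgb, rng):
    """DDPM-style reverse sampling: start from Gaussian noise and iteratively
    denoise into a joint-value vector, conditioned on the observed frame."""
    cond = encode_image(rgb)
    x = rng.standard_normal(NUM_JOINTS)  # pure noise at step T
    for t in reversed(range(T)):
        eps = denoiser(x, t, cond)
        # Posterior mean update of the standard DDPM sampler.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(NUM_JOINTS)
    return x

rng = np.random.default_rng(0)
frame = np.zeros((64, 64, 3))  # dummy RGB observation of a demonstrator
joints = sample_joints(frame, rng)
print(joints.shape)
```

The key property this sketch shows is the one the abstract emphasizes: the sampler starts from noise in joint space rather than from an explicit human-pose estimate, so no predefined human-to-robot motion mapping is needed.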
Problem

Research questions and friction points this paper is trying to address.

Robot Learning
Human Actions
Efficient Model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Models
End-to-End Learning
Redundant Actions Handling
Josua Spisak
Knowledge Technology (WTM), University of Hamburg
Matthias Kerzel
Knowledge Technology Engineer, University of Hamburg
AI · Artificial Neural Networks · Neurorobotics · Developmental Robotics
Stefan Wermter
Knowledge Technology (WTM), University of Hamburg