🤖 AI Summary
This work addresses the action execution mismatch arising from morphological discrepancies between human video demonstrations and robotic platforms. We propose a diffusion-based cross-morphology imitation learning framework. Methodologically, we introduce a classifier-guided noise injection mechanism that dynamically assesses the feasibility of human actions and hierarchically models human and robot action distributions during the forward diffusion process. Hierarchical noise scheduling, coupled with policy distillation, jointly optimizes high-level task intent preservation and low-level physical realizability. Our key contribution is the first integration of classifier-guided controllable diffusion into cross-morphology policy learning, enabling safe, layered use of human demonstration data. Evaluated on five robotic manipulation tasks, our approach achieves an average success rate 16% higher than that of the best baseline, demonstrating both effectiveness and robustness.
📝 Abstract
Human videos can be recorded quickly and at scale, making them an appealing source of training data for robot learning. However, humans and robots differ fundamentally in embodiment, resulting in mismatched action execution. Direct kinematic retargeting of human hand motion can therefore produce actions that are physically infeasible for robots. Despite these low-level differences, human demonstrations provide valuable motion cues about how to manipulate and interact with objects. Our key idea is to exploit the forward diffusion process: as noise is added to actions, low-level execution differences fade while high-level task guidance is preserved. We present X-Diffusion, a principled framework for training diffusion policies that maximally leverages human data without learning dynamically infeasible motions. X-Diffusion first trains a classifier to predict whether a noisy action is executed by a human or robot. Then, a human action is incorporated into policy training only after adding sufficient noise such that the classifier cannot discern its embodiment. Actions consistent with robot execution supervise fine-grained denoising at low noise levels, while mismatched human actions provide only coarse guidance at higher noise levels. Our experiments show that naive co-training under execution mismatches degrades policy performance, while X-Diffusion consistently improves it. Across five manipulation tasks, X-Diffusion achieves a 16% higher average success rate than the best baseline. The project website is available at https://portal-cornell.github.io/X-Diffusion/.
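The gating idea in the abstract can be sketched in a few lines of Python. This is a minimal illustration only, not the authors' implementation: the cosine noise schedule, the linear `toy_classifier`, the `margin` threshold, and the example action vectors are all hypothetical stand-ins. The sketch shows the core mechanism: noise an action under the forward diffusion process, and find the smallest noise level at which an embodiment classifier can no longer tell human from robot; a human action would then supervise denoising only at levels at or above that threshold.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative cumulative noise schedule: alpha_bar decreases from ~1 to 0
# over T forward-diffusion steps (cosine-style, hypothetical).
T = 50
alpha_bar = np.cos(np.linspace(0.0, np.pi / 2, T + 1)[1:]) ** 2

def noisy_action(a0, t, rng):
    """Forward diffusion: a_t = sqrt(ab_t) * a_0 + sqrt(1 - ab_t) * eps."""
    eps = rng.standard_normal(a0.shape)
    return np.sqrt(alpha_bar[t]) * a0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Hypothetical embodiment classifier: a fixed linear probe on the (noisy)
# action, returning P(human). In the paper this classifier is learned.
w = np.array([2.0, -1.5, 0.5])

def toy_classifier(a):
    return sigmoid(a @ w)

def min_indistinguishable_level(a0, classifier, rng, margin=0.05, n_samples=64):
    """Smallest noise level t at which the classifier is near chance (0.5) on
    noised copies of a0. A human action would supervise denoising only at
    levels >= t*, while robot actions supervise all levels."""
    for t in range(T):
        p = np.mean([classifier(noisy_action(a0, t, rng)) for _ in range(n_samples)])
        if abs(p - 0.5) <= margin:
            return t
    return T - 1  # fall back to the noisiest level

rng = np.random.default_rng(0)
a0_human = np.array([1.0, -1.0, 0.5])  # "human-looking": classifier is confident
a0_robot = np.array([0.3, 0.4, 0.0])   # "robot-looking": classifier near chance
t_human = min_indistinguishable_level(a0_human, toy_classifier, rng)
t_robot = min_indistinguishable_level(a0_robot, toy_classifier, rng)
print(t_robot, t_human)  # robot-like action usable at every level; human-like one only at t >= t_human
```

As the signal term `sqrt(alpha_bar_t) * a_0` shrinks and the noise term grows, the classifier's output drifts toward chance, which is exactly the point at which the embodiment of the action can no longer be discerned.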