🤖 AI Summary
This work addresses the problem of directly generating robot-executable videos from human demonstration videos, bypassing conventional intermediate representations (e.g., keypoints or trajectories) to avoid information loss and error accumulation. We propose an end-to-end visual-temporal cross-modal mapping framework: (1) a novel diffusion Transformer architecture enabling video-contextual learning; (2) conditional token compression and bidirectional attention fusion for efficient temporal alignment; and (3) a fully automated human-robot video-pair synthesis pipeline to mitigate the scarcity of real paired data. Our method builds upon a pre-trained video diffusion model and requires neither action labels nor explicit motion modeling. Evaluated on Human2Robot and EPIC-Kitchens, it achieves state-of-the-art performance, significantly improving visual fidelity, temporal consistency, and robot executability, while demonstrating strong cross-environment generalization.
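The conditional token compression mentioned in (2) can be illustrated with a minimal sketch. This is not the paper's actual module (whose design is not specified here); it simply shows the general idea of shortening a long demonstration-video token sequence before it is used as conditioning, here via strided mean pooling. The function name, stride, and pooling choice are all illustrative assumptions.

```python
import numpy as np

def compress_condition_tokens(video_tokens, stride=4):
    """Illustrative compression of demo-video tokens by strided mean pooling.

    (Hypothetical stand-in for the paper's condition-token compression.)
    video_tokens: array of shape (T, d) -> output of shape (ceil(T/stride), d).
    """
    T, d = video_tokens.shape
    pad = (-T) % stride  # zero-pad so T divides evenly into groups
    x = np.concatenate([video_tokens, np.zeros((pad, d))]) if pad else video_tokens
    # Number of *real* (non-padding) tokens in each group, for a correct mean.
    counts = np.minimum(stride, T - np.arange(0, T, stride))
    return x.reshape(-1, stride, d).sum(axis=1) / counts[:, None]

# Example: 10 frame tokens of dimension 2 compressed to 3 condition tokens.
tokens = np.arange(20, dtype=float).reshape(10, 2)
cond = compress_condition_tokens(tokens, stride=4)
```

The point of such compression is efficiency: the fused attention during diffusion scales with the total token count, so shortening the conditioning sequence directly cuts compute.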
📝 Abstract
Learning directly from human demonstration videos is a key milestone toward scalable and generalizable robot learning. Yet existing methods rely on intermediate representations such as keypoints or trajectories, introducing information loss and cumulative errors that harm temporal and visual consistency. We present Mitty, a Diffusion Transformer that enables video In-Context Learning for end-to-end Human2Robot video generation. Built on a pretrained video diffusion model, Mitty leverages strong visual-temporal priors to translate human demonstrations into robot-execution videos without action labels or intermediate abstractions. Demonstration videos are compressed into condition tokens and fused with robot denoising tokens through bidirectional attention during diffusion. To mitigate paired-data scarcity, we also develop an automatic synthesis pipeline that produces high-quality human-robot pairs from large egocentric datasets. Experiments on Human2Robot and EPIC-Kitchens show that Mitty delivers state-of-the-art results, strong generalization to unseen environments, and new insights for scalable robot learning from human observations.
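The fusion step described above ("compressed into condition tokens and fused with robot denoising tokens through bidirectional attention") can be sketched as joint, unmasked self-attention over the concatenated sequence. This is a minimal single-head sketch, not the paper's implementation: the function names, projection matrices, and the absence of multi-head structure, layer norm, and timestep conditioning are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidirectional_fusion(cond_tokens, denoise_tokens, Wq, Wk, Wv):
    """Single-head attention over the concatenated condition + denoising tokens.

    With no causal mask, every token attends to every other, so information
    flows in both directions between the human-demo condition tokens and the
    robot denoising tokens. (Illustrative sketch, not the paper's module.)
    """
    x = np.concatenate([cond_tokens, denoise_tokens], axis=0)  # (Nc+Nd, d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # rows sum to 1
    out = attn @ v
    n_cond = cond_tokens.shape[0]
    return out[n_cond:]  # keep only the updated robot denoising tokens

# Example: 4 condition tokens fused with 6 denoising tokens, dimension 8.
rng = np.random.default_rng(0)
d = 8
cond = rng.standard_normal((4, d))      # compressed demo-video tokens
denoise = rng.standard_normal((6, d))   # noisy robot-video tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
fused = bidirectional_fusion(cond, denoise, Wq, Wk, Wv)
```

In a full diffusion Transformer this fusion would be repeated in every block during each denoising step, so the robot branch is conditioned on the demonstration throughout generation rather than through a one-shot intermediate representation.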