Mitty: Diffusion-based Human-to-Robot Video Generation

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of directly generating robot-executable videos from human demonstration videos, bypassing conventional intermediate representations (e.g., keypoints or trajectories) to avoid information loss and error accumulation. The authors propose an end-to-end visual-temporal cross-modal mapping framework: (1) a diffusion Transformer architecture enabling video in-context learning; (2) condition-token compression and bidirectional attention fusion for efficient temporal alignment; and (3) a fully automated human-robot video-pair synthesis pipeline to mitigate the scarcity of real paired data. The method builds on a pre-trained video diffusion model and requires neither action labels nor explicit motion modeling. Evaluated on Human2Robot and EPIC-Kitchens, it achieves state-of-the-art performance, significantly improving visual fidelity, temporal consistency, and robot executability, while demonstrating strong cross-environment generalization.

📝 Abstract
Learning directly from human demonstration videos is a key milestone toward scalable and generalizable robot learning. Yet existing methods rely on intermediate representations such as keypoints or trajectories, introducing information loss and cumulative errors that harm temporal and visual consistency. We present Mitty, a Diffusion Transformer that enables video In-Context Learning for end-to-end Human2Robot video generation. Built on a pretrained video diffusion model, Mitty leverages strong visual-temporal priors to translate human demonstrations into robot-execution videos without action labels or intermediate abstractions. Demonstration videos are compressed into condition tokens and fused with robot denoising tokens through bidirectional attention during diffusion. To mitigate paired-data scarcity, we also develop an automatic synthesis pipeline that produces high-quality human-robot pairs from large egocentric datasets. Experiments on Human2Robot and EPIC-Kitchens show that Mitty delivers state-of-the-art results, strong generalization to unseen environments, and new insights for scalable robot learning from human observations.
Problem

Research questions and friction points this paper is trying to address.

Generates robot videos from human demonstrations without intermediate representations
Translates human actions to robot execution using diffusion-based video generation
Addresses data scarcity by synthesizing human-robot video pairs automatically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Transformer for Human2Robot video generation
Bidirectional attention fuses human tokens with robot tokens
Automatic synthesis pipeline creates human-robot pairs from datasets
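The fusion step above (human condition tokens attending to robot denoising tokens and vice versa) can be read as joint, unmasked self-attention over the concatenated token streams. A minimal PyTorch sketch of that reading follows; it is an illustration of the general technique, not the paper's released code, and all names (`BidirectionalFusionBlock`, the toy token counts and dimensions) are hypothetical.

```python
import torch
import torch.nn as nn


class BidirectionalFusionBlock(nn.Module):
    """Joint self-attention over compressed human-demo condition tokens and
    robot denoising tokens, so information flows in both directions."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, cond_tokens: torch.Tensor, robot_tokens: torch.Tensor) -> torch.Tensor:
        # Concatenate both streams along the sequence axis and attend jointly
        # with no causal mask, so every token can attend to every other.
        x = torch.cat([cond_tokens, robot_tokens], dim=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        # Return only the updated robot denoising tokens for the next step.
        return x[:, cond_tokens.shape[1]:]


# Toy shapes: batch 2, 16 compressed condition tokens, 64 robot tokens, dim 32.
block = BidirectionalFusionBlock(dim=32)
cond = torch.randn(2, 16, 32)
robot = torch.randn(2, 64, 32)
out = block(cond, robot)
print(out.shape)  # torch.Size([2, 64, 32])
```

The output keeps only the robot-token slice, consistent with the idea that the human demonstration conditions the denoising stream rather than being generated itself.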