Object-centric 3D Motion Field for Robot Learning from Human Videos

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the information loss and modeling complexity of existing action representations in zero-shot human video-to-robot policy transfer, this paper introduces an object-centric 3D motion field as the action representation. Methodologically, the authors (1) propose the first object-centric 3D motion field, explicitly modeling task-relevant rigid-body motion; (2) design a noise-robust, depth-guided denoising training paradigm to improve 3D motion estimation accuracy; and (3) develop a dense prediction architecture that supports cross-embodiment robot transfer and background generalization. Experiments show a >50% reduction in 3D motion estimation error; an average success rate of 55% in multi-task zero-shot policy learning, substantially outperforming prior methods (<10%); and successful execution of fine-grained manipulation tasks (e.g., plug insertion). This work establishes a novel, interpretable, and generalizable cross-modal vision-action transfer paradigm.
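
As a concrete illustration of the representation, the snippet below computes an object-centric 3D motion field for a rigid body: given the object's SE(3) motion between two frames, each object point is assigned its 3D displacement. This is a minimal sketch of ours, not the paper's code; the function name `rigid_motion_field` and the NumPy interface are assumptions.

```python
# Minimal sketch (assumed interface, not the paper's code): an object-centric
# 3D motion field for a rigid body between two frames.
import numpy as np

def rigid_motion_field(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """points: (N, 3) object points in the first frame.
    R: (3, 3) rotation, t: (3,) translation of the object between frames.
    Returns (N, 3) per-point 3D displacement vectors."""
    moved = points @ R.T + t   # object points after the rigid motion
    return moved - points      # displacement field

# Example: a pure translation yields a constant field.
pts = np.random.rand(100, 3)
field = rigid_motion_field(pts, np.eye(3), np.array([0.0, 0.0, 0.05]))
assert np.allclose(field, [0.0, 0.0, 0.05])
```

Because the field is defined on the object rather than on the camera frame or the human hand, the same representation can, in principle, be predicted from a human video and executed by any robot that can impose that motion on the object.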

📝 Abstract
Learning robot control policies from human videos is a promising direction for scaling up robot learning. However, how to extract action knowledge (or action representations) from videos for policy learning remains a key challenge. Existing action representations such as video frames, pixel flow, and point cloud flow have inherent limitations such as modeling complexity or loss of information. In this paper, we propose to use an object-centric 3D motion field to represent actions for robot learning from human videos, and present a novel framework for extracting this representation from videos for zero-shot control. We introduce two novel components in its implementation. First, a novel training pipeline for training a "denoising" 3D motion field estimator that robustly extracts fine object 3D motions from human videos with noisy depth. Second, a dense object-centric 3D motion field prediction architecture that favors both cross-embodiment transfer and policy generalization across backgrounds. We evaluate the system in real-world setups. Experiments show that our method reduces 3D motion estimation error by over 50% compared to the latest method, achieves a 55% average success rate on diverse tasks where prior approaches fail (≲10%), and can even acquire fine-grained manipulation skills like insertion.
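
The "denoising" estimator idea can be sketched as a training loop that corrupts 3D points with synthetic depth noise while supervising against clean ground-truth motion. The sketch below is under our own assumptions (an MLP stand-in for the real estimator, Gaussian depth noise, L1 loss); the paper's actual architecture and noise model are not specified here.

```python
# Minimal sketch of denoising-style supervision for a 3D motion estimator.
# Assumptions: MLP stand-in (the real model is a dense prediction network),
# Gaussian depth noise, L1 loss.
import torch
import torch.nn as nn

estimator = nn.Sequential(
    nn.Linear(6, 128), nn.ReLU(),
    nn.Linear(128, 3),
)
opt = torch.optim.Adam(estimator.parameters(), lr=1e-3)

def training_step(pts_t, pts_t1, depth_noise_std=0.01):
    """pts_t, pts_t1: (N, 3) clean corresponding 3D points at frames t, t+1."""
    clean_motion = pts_t1 - pts_t
    # Corrupt the inputs as noisy depth sensing would...
    noisy_t = pts_t + depth_noise_std * torch.randn_like(pts_t)
    noisy_t1 = pts_t1 + depth_noise_std * torch.randn_like(pts_t1)
    # ...but supervise against the clean motion, so the estimator learns
    # to output denoised 3D motion from noisy observations.
    pred = estimator(torch.cat([noisy_t, noisy_t1], dim=-1))
    loss = (pred - clean_motion).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example: a synthetic rigid translation along z.
pts_t = torch.rand(256, 3)
pts_t1 = pts_t + torch.tensor([0.0, 0.0, 0.05])
print(training_step(pts_t, pts_t1))
```

The key design choice is that noise lives only on the input side: the target motion stays clean, so the estimator cannot simply reproduce sensor noise.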
Problem

Research questions and friction points this paper is trying to address.

Extracting action knowledge from human videos for robot learning
Overcoming limitations of existing action representations, such as information loss and modeling complexity
Enabling zero-shot control via object-centric 3D motion fields
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object-centric 3D motion field as the action representation
Noise-robust "denoising" 3D motion field estimator
Dense prediction architecture for cross-embodiment transfer (one way to execute a predicted field is sketched below)
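
One natural way to execute a predicted object-centric 3D motion field on an arbitrary embodiment is to fit a rigid SE(3) transform to the field via a least-squares (Kabsch) fit and command the robot to impose that motion on the object. This is our assumption about a plausible consumer of the representation, not necessarily the paper's exact pipeline; `fit_se3` is a hypothetical helper.

```python
# Hypothetical helper: recover the rigid motion (R, t) best explaining a
# predicted per-point 3D motion field, via the Kabsch/Procrustes fit.
import numpy as np

def fit_se3(points: np.ndarray, field: np.ndarray):
    """Find R, t minimizing sum_i ||R p_i + t - (p_i + m_i)||^2,
    where p_i are object points and m_i their predicted 3D motions."""
    src, dst = points, points + field
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Round-trip check with a known motion (90 degrees about z, plus translation).
pts = np.random.rand(50, 3)
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([0.1, 0.0, 0.02])
field = pts @ R_true.T + t_true - pts
R_est, t_est = fit_se3(pts, field)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

Because the fit depends only on the object's points and motions, the same predicted field can drive grippers with different kinematics, which is what makes the representation embodiment-agnostic.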