CoMo: Learning Continuous Latent Motion from Internet Videos for Scalable Robot Learning

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Discrete latent action representations in internet-video-driven general-purpose robot learning suffer from information loss and fail to capture fine-grained dynamics. Method: We propose a self-supervised framework for learning continuous implicit motion representations without action annotations. It incorporates early temporal feature differencing to prevent representational collapse, enforces an information bottleneck to compress and disentangle the latent space, and introduces two novel evaluation metrics—linear probe MSE and motion embedding cosine similarity—to assess representation quality. The framework enables zero-shot cross-domain generation of continuous pseudo-actions. Results: Integrating our representation into diffusion- and autoregressive-based policies significantly improves performance on both simulated and real-world robotic tasks. Moreover, it supports zero-shot generalization to unseen, unlabeled data sources—including humanoid demonstrations and videos of morphologically distinct robots—demonstrating strong transferability and scalability.

📝 Abstract
Learning latent motion from Internet videos is crucial for building generalist robots. However, existing discrete latent action methods suffer from information loss and struggle with complex and fine-grained dynamics. We propose CoMo, which aims to learn more informative continuous motion representations from diverse, internet-scale videos. CoMo employs an early temporal feature difference mechanism to prevent model collapse and suppress static appearance noise, effectively discouraging shortcut learning. Furthermore, guided by the information bottleneck principle, we constrain the dimensionality of the latent motion embedding to strike a better balance between retaining sufficient action-relevant information and minimizing action-irrelevant appearance noise. Additionally, we introduce two new metrics for more robustly and affordably evaluating motion and guiding the development of motion learning methods: (i) the linear probing MSE of action prediction, and (ii) the cosine similarity between past-to-current and future-to-current motion embeddings. Critically, CoMo exhibits strong zero-shot generalization, enabling it to generate continuous pseudo actions for previously unseen video domains. This capability facilitates unified joint policy learning using pseudo actions derived from various action-less video datasets (such as cross-embodiment videos and, notably, human demonstration videos), potentially augmented with limited labeled robot data. Extensive experiments show that policies co-trained with CoMo pseudo actions achieve superior performance with both diffusion and autoregressive architectures in simulated and real-world settings.
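The two evaluation metrics named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the least-squares probe, and the averaging over a batch are all our assumptions; only the general idea (a linear probe from motion embeddings to actions, and a cosine similarity between past-to-current and future-to-current embeddings) comes from the abstract.

```python
import numpy as np

def linear_probe_mse(motion_embeddings, actions):
    """Fit a least-squares linear map from motion embeddings to ground-truth
    actions and report the prediction MSE. Lower MSE suggests the embeddings
    retain more action-relevant information. Shapes: (N, D) -> (N, A)."""
    # Append a bias column, then solve the linear least-squares problem.
    X = np.hstack([motion_embeddings, np.ones((len(motion_embeddings), 1))])
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)
    pred = X @ W
    return float(np.mean((pred - actions) ** 2))

def motion_cosine_similarity(past_to_current, future_to_current):
    """Mean cosine similarity between past-to-current and future-to-current
    motion embeddings, as a cheap annotation-free quality signal. How the
    score is interpreted (and its exact formulation) follows the paper."""
    a = past_to_current / np.linalg.norm(past_to_current, axis=-1, keepdims=True)
    b = future_to_current / np.linalg.norm(future_to_current, axis=-1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=-1)))
```

A probe like this needs only a small amount of action-labeled data, which is what makes the metric "affordable" relative to training a full policy for every candidate representation.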
Problem

Research questions and friction points this paper is trying to address.

Learning continuous motion from diverse internet videos
Overcoming information loss in discrete latent action methods
Enabling zero-shot generalization for unseen video domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous latent motion learning from videos
Early temporal feature difference mechanism
Information bottleneck for motion embedding
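The early temporal feature difference idea in the list above can be sketched as follows, under our own simplifying assumptions: per-frame features are a (T, D) array from some early encoder layer, and the mechanism is approximated by plain subtraction of adjacent frames so that static appearance cancels and only change survives. The actual CoMo architecture may differ.

```python
import numpy as np

def early_temporal_feature_difference(frame_features):
    """Illustrative sketch: difference features of adjacent frames at an
    early layer. Static appearance is identical across frames and cancels,
    leaving a motion-dominated signal. frame_features: (T, D) array."""
    return frame_features[1:] - frame_features[:-1]
```

On a static clip the differences are exactly zero, which is the intuition behind using differencing to suppress appearance shortcuts and representational collapse.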
Jiange Yang
Nanjing University
Deep LearningComputer VisionRoboticsEmbodied AI
Yansong Shi
Shanghai Artificial Intelligence Laboratory, University of Science and Technology of China
Haoyi Zhu
Shanghai AI Lab | USTC | SJTU
World ModelSpatial IntelligenceRobot LearningEmbodied AI
Mingyu Liu
Technical University of Munich
Computer VisionDeep Learning
Kaijing Ma
Fudan University
Computer VisionMachine Learning
Yating Wang
Shanghai Artificial Intelligence Laboratory, Tongji University
Gangshan Wu
Nanjing University
Tong He
Shanghai Artificial Intelligence Laboratory
Limin Wang
Nanjing University, Shanghai Artificial Intelligence Laboratory