🤖 AI Summary
Discrete latent action representations in internet-video-driven general-purpose robot learning suffer from information loss and fail to capture fine-grained dynamics.
Method: We propose a self-supervised framework for learning continuous implicit motion representations without action annotations. It incorporates early temporal feature differencing to prevent representational collapse, enforces an information bottleneck to compress and disentangle the latent space, and introduces two novel evaluation metrics (linear probe MSE and motion embedding cosine similarity) to assess representation quality. The framework enables zero-shot cross-domain generation of continuous pseudo-actions.
Results: Integrating our representation into diffusion- and autoregressive-based policies significantly improves performance on both simulated and real-world robotic tasks. Moreover, it supports zero-shot generalization to unseen, unlabeled data sources—including humanoid demonstrations and videos of morphologically distinct robots—demonstrating strong transferability and scalability.
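The two evaluation metrics can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names are my own, and a closed-form least-squares fit stands in for whatever linear probe training the authors actually use.

```python
import numpy as np

def linear_probe_mse(motion_emb, actions):
    """Fit a least-squares linear probe from motion embeddings to
    ground-truth actions and return the mean squared error.
    Lower MSE suggests the embeddings retain action-relevant information."""
    # Append a bias column so the probe is affine.
    X = np.hstack([motion_emb, np.ones((len(motion_emb), 1))])
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)
    pred = X @ W
    return float(np.mean((pred - actions) ** 2))

def motion_cosine_similarity(past_to_current, future_to_current):
    """Mean cosine similarity between past-to-current and
    future-to-current motion embeddings. Since the two directions
    describe opposite motions, a good motion space should score
    them as dissimilar (low or negative similarity)."""
    a = past_to_current / np.linalg.norm(past_to_current, axis=-1, keepdims=True)
    b = future_to_current / np.linalg.norm(future_to_current, axis=-1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=-1)))
```

For a sanity check: embeddings from which actions are exactly linearly decodable give near-zero probe MSE, and exactly reversed motion embeddings give a cosine similarity of -1.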
📝 Abstract
Learning latent motion from Internet videos is crucial for building generalist robots. However, existing discrete latent action methods suffer from information loss and struggle with complex, fine-grained dynamics. We propose CoMo, which aims to learn more informative continuous motion representations from diverse, internet-scale videos. CoMo employs an early temporal feature differencing mechanism to prevent model collapse and suppress static appearance noise, effectively discouraging shortcut learning. Furthermore, guided by the information bottleneck principle, we constrain the dimensionality of the latent motion embedding to balance retaining sufficient action-relevant information against admitting action-irrelevant appearance noise. We also introduce two new metrics for more robustly and affordably evaluating motion representations and guiding the development of motion learning methods: (i) the linear probing MSE of action prediction, and (ii) the cosine similarity between past-to-current and future-to-current motion embeddings. Critically, CoMo exhibits strong zero-shot generalization, enabling it to generate continuous pseudo-actions for previously unseen video domains. This capability facilitates unified policy joint learning using pseudo-actions derived from various action-less video datasets (such as cross-embodiment videos and, notably, human demonstration videos), potentially augmented with limited labeled robot data. Extensive experiments show that policies co-trained with CoMo pseudo-actions achieve superior performance with both diffusion and autoregressive architectures in simulated and real-world settings.
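The early temporal feature differencing idea can be illustrated with a toy sketch. This is only a schematic under my own assumptions: the paper operates on learned encoder features, whereas here per-frame features are plain arrays, and a PCA projection stands in for the learned low-dimensional bottleneck purely to show the dimensionality constraint.

```python
import numpy as np

def early_temporal_difference(frame_feats):
    """Subtract consecutive early-layer frame features so that static
    appearance content cancels and mostly motion-related signal remains.
    frame_feats: (T, D) array of per-frame features."""
    return frame_feats[1:] - frame_feats[:-1]

def bottleneck_project(diff_feats, dim):
    """Compress motion features to a small latent dimensionality
    (here via PCA/SVD), illustrating an information-bottleneck-style
    cap on how much appearance noise the embedding can carry."""
    centered = diff_feats - diff_feats.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:dim].T  # keep the top `dim` components
```

A perfectly static clip yields an all-zero difference signal, which is the intuition behind suppressing appearance noise: only what changes between frames survives the subtraction.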