ConLA: Contrastive Latent Action Learning from Human Videos for Robotic Manipulation

📅 2026-01-31
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of learning transferable robotic manipulation policies from human demonstration videos that lack explicit action labels, while avoiding the shortcut learning and representation entanglement that arise when the training objective reconstructs visual appearance. The authors propose an unsupervised pretraining framework built around a contrastive disentanglement mechanism that integrates action category priors with temporal dynamics to separate motion semantics from visual content, yielding clean, semantically consistent latent action representations. Notably, it is the first method whose pretraining on human videos alone surpasses models pretrained on real robot trajectories. Extensive experiments across multiple robotic manipulation benchmarks demonstrate strong generalization and practical utility, improving both the disentanglement and the transferability of the learned action representations.
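For context, the VQ-VAE-style latent action pretraining that this summary says ConLA improves upon typically quantizes a latent "action" inferred from a frame pair and trains it by reconstructing the next frame. Below is a minimal sketch of that baseline objective; all module names, layer sizes, and the codebook size are hypothetical choices for illustration, not taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionVQVAE(nn.Module):
    """Sketch of a VQ-VAE latent action model: an encoder maps a frame-feature
    pair (o_t, o_{t+1}) to a quantized latent action z, and a decoder
    reconstructs o_{t+1} from (o_t, z). Sizes are hypothetical."""

    def __init__(self, frame_dim=512, latent_dim=32, codebook_size=64):
        super().__init__()
        # Encoder over concatenated frame features -> continuous latent action.
        self.encoder = nn.Sequential(
            nn.Linear(2 * frame_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Discrete codebook of latent actions (vector quantization).
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        # Decoder predicts the next frame from current frame + latent action.
        self.decoder = nn.Sequential(
            nn.Linear(frame_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, frame_dim),
        )

    def forward(self, obs_t, obs_tp1):
        z_e = self.encoder(torch.cat([obs_t, obs_tp1], dim=-1))
        # Nearest-neighbor lookup in the codebook.
        dists = torch.cdist(z_e, self.codebook.weight)   # (B, K)
        idx = dists.argmin(dim=-1)                       # (B,)
        z_q = self.codebook(idx)
        # Straight-through estimator so gradients reach the encoder.
        z_st = z_e + (z_q - z_e).detach()
        recon = self.decoder(torch.cat([obs_t, z_st], dim=-1))
        # Reconstruction + standard VQ-VAE codebook/commitment terms.
        loss = (F.mse_loss(recon, obs_tp1)
                + F.mse_loss(z_q, z_e.detach())
                + 0.25 * F.mse_loss(z_e, z_q.detach()))
        return loss, idx
```

Because the objective rewards reconstructing o_{t+1}, the latent can "cheat" by encoding static appearance rather than inter-frame motion, which is exactly the shortcut-learning and entanglement failure mode ConLA targets.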

📝 Abstract
Vision-Language-Action (VLA) models achieve preliminary generalization through pretraining on large-scale robot teleoperation datasets. However, acquiring datasets that comprehensively cover diverse tasks and environments is extremely costly and difficult to scale. In contrast, human demonstration videos offer a rich and scalable source of diverse scenes and manipulation behaviors, yet their lack of explicit action supervision hinders direct utilization. Prior work leverages VQ-VAE-based frameworks to learn latent actions from human videos in an unsupervised manner. Nevertheless, since the training objective primarily focuses on reconstructing visual appearance rather than capturing inter-frame dynamics, the learned representations tend to rely on spurious visual cues, leading to shortcut learning and entangled latent representations that hinder transferability. To address this, we propose ConLA, an unsupervised pretraining framework for learning robotic policies from human videos. ConLA introduces a contrastive disentanglement mechanism that leverages action category priors and temporal cues to isolate motion dynamics from visual content, effectively mitigating shortcut learning. Extensive experiments show that ConLA achieves strong performance across diverse benchmarks. Notably, by pretraining solely on human videos, our method for the first time surpasses the performance obtained with real robot trajectory pretraining, highlighting its ability to extract pure and semantically consistent latent action representations for scalable robot learning.
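The abstract does not spell out the loss, but a contrastive disentanglement term that "leverages action category priors" could plausibly take the form of a supervised InfoNCE over latent actions: latents from clips sharing an action category are pulled together, all others pushed apart. The sketch below is our own assumption of that shape; the encoder, batch layout, and `temperature` value are hypothetical, and this is not the paper's actual objective:

```python
import torch
import torch.nn.functional as F

def category_contrastive_loss(z, labels, temperature=0.1):
    """Supervised InfoNCE over latent actions z of shape (B, D): pairs with
    the same action-category label are positives, all others negatives.
    A sketch of the described contrastive disentanglement, not ConLA's loss."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature                        # (B, B) cosine sims
    # Exclude self-similarity on the diagonal.
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))
    # Positives: same action category, excluding self.
    pos = (labels[:, None] == labels[None, :]) & ~eye    # (B, B)
    log_prob = sim - sim.logsumexp(dim=-1, keepdim=True)
    # Mean log-likelihood of positives per anchor.
    pos_counts = pos.sum(dim=-1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos, 0).sum(dim=-1) / pos_counts)
    has_pos = pos.any(dim=-1)                            # skip anchors w/o positives
    return loss[has_pos].mean()

# Usage (hypothetical): z comes from a latent action encoder over frame pairs,
# labels from the action-category priors mentioned in the abstract.
# z = torch.randn(8, 32); labels = torch.randint(0, 3, (8,))
# loss = category_contrastive_loss(z, labels)
```

Optimizing such a term alongside (or instead of) pure reconstruction would push the latent to encode what distinguishes motions across categories rather than scene appearance, which matches the paper's stated goal of isolating motion dynamics from visual content.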
Problem

Research questions and friction points this paper is trying to address.

latent action learning
human demonstration videos
shortcut learning
representation disentanglement
robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

contrastive disentanglement
latent action learning
human video pretraining
robotic manipulation
unsupervised representation learning
🔎 Similar Papers
No similar papers found.