CLAP: Contrastive Latent Action Pretraining for Learning Vision-Language-Action Models from Human Videos

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses two limitations of current vision-language-action (VLA) models: the scarcity of robotic demonstration data, and the susceptibility of human-video-based latent action approaches to visual distractions, which prevents them from extracting executable skills. The proposed Contrastive Latent Action Pretraining (CLAP) framework aligns the visual latent space of human videos with robot proprioceptive trajectories through contrastive learning, mapping video transitions onto an executable quantized codebook. On top of this representation, a dual-branch VLA architecture offers CLAP-NTP, an autoregressive model, and CLAP-RF, a Rectified Flow-based policy, while a knowledge-matching regularization strategy mitigates catastrophic forgetting during fine-tuning. Experiments show that the approach significantly outperforms baseline methods in skill transfer, with superior instruction following, object generalization, and high-frequency precise manipulation.

📝 Abstract
Generalist Vision-Language-Action models are currently hindered by the scarcity of robotic data compared to the abundance of human video demonstrations. Existing Latent Action Models attempt to leverage video data but often suffer from visual entanglement, capturing noise rather than manipulation skills. To address this, we propose Contrastive Latent Action Pretraining (CLAP), a framework that aligns the visual latent space from videos with a proprioceptive latent space from robot trajectories. By employing contrastive learning, CLAP maps video transitions onto a quantized, physically executable codebook. Building on this representation, we introduce a dual-formulation VLA framework offering both CLAP-NTP, an autoregressive model excelling at instruction following and object generalization, and CLAP-RF, a Rectified Flow-based policy designed for high-frequency, precise manipulation. Furthermore, we propose a Knowledge Matching (KM) regularization strategy to mitigate catastrophic forgetting during fine-tuning. Extensive experiments demonstrate that CLAP significantly outperforms strong baselines, enabling the effective transfer of skills from human videos to robotic execution. Project page: https://lin-shan.com/CLAP/.
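The core idea in the abstract, aligning paired video and proprioceptive embeddings contrastively and snapping them to a quantized, executable codebook, can be sketched roughly as follows. This is a minimal illustration under common assumptions (a symmetric InfoNCE objective and nearest-neighbor vector quantization); the function names, the temperature value, and the exact loss form are not taken from the paper.

```python
import numpy as np

def info_nce_loss(video_z, proprio_z, temperature=0.07):
    """Symmetric InfoNCE loss aligning paired embeddings (hypothetical sketch).

    video_z, proprio_z: (batch, dim) L2-normalized embeddings; row i of each
    is assumed to come from the same transition (the positive pair).
    """
    logits = video_z @ proprio_z.T / temperature      # (batch, batch) similarities
    labels = np.arange(len(logits))                   # positives lie on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()           # cross-entropy vs. diagonal

    # average over both directions: video->proprio and proprio->video
    return 0.5 * (xent(logits) + xent(logits.T))

def quantize(z, codebook):
    """Map each embedding to its nearest codebook entry (the 'executable' code)."""
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (batch, K)
    idx = d.argmin(axis=1)
    return codebook[idx], idx
```

In this reading, the contrastive term pulls video transitions toward physically grounded proprioceptive latents, and the quantization step restricts actions to a discrete, executable vocabulary.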
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
human video demonstrations
latent action models
visual entanglement
robotic data scarcity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive Learning
Latent Action Models
Vision-Language-Action
Rectified Flow
Knowledge Matching
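Of the contributions above, Knowledge Matching (KM) is described only as a regularizer against catastrophic forgetting during fine-tuning. A common way to realize such a penalty is to keep fine-tuned weights close to their pretrained values (L2-SP style); the sketch below assumes that form, and the function name and weighting are illustrative, not the paper's actual formulation, which may instead match features or output distributions.

```python
import numpy as np

def km_regularizer(params, pretrained_params, weight=1e-3):
    """Hypothetical Knowledge-Matching penalty (L2-SP-style sketch).

    Penalizes squared deviation of each fine-tuned parameter tensor from its
    frozen pretrained counterpart, discouraging catastrophic forgetting.
    """
    return weight * sum(
        float(((p - p0) ** 2).sum())
        for p, p0 in zip(params, pretrained_params)
    )
```

During fine-tuning this term would simply be added to the task loss, so gradients trade off new-skill fit against drift from pretrained knowledge.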