Joint-Aligned Latent Action: Towards Scalable VLA Pretraining in the Wild

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem that existing Vision-Language-Action (VLA) models are limited by the scarcity of high-quality robotic data, yet struggle to exploit the vast but noisily labeled in-the-wild human demonstration videos that could fill this gap. To overcome this, we propose JALA, a pretraining framework that learns action-centric, transition-aware latent representations aligned with both inverse dynamics and ground-truth actions, bypassing the need for full visual dynamics reconstruction. JALA introduces a jointly aligned latent action space that integrates laboratory-collected and in-the-wild data from the UniHand-Mix corpus, a multi-source video dataset comprising 7.5 million clips and over 2,000 hours of footage, thereby breaking the trade-off between data scale and label fidelity. Experiments demonstrate that JALA significantly improves robotic manipulation performance in both simulation and real-world tasks while generating more realistic hand motions, validating its effectiveness across controlled and unconstrained scenarios.
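
To make the alignment idea concrete, below is a minimal PyTorch sketch of how such a jointly aligned objective could look: an inverse-dynamics branch infers a transition-aware latent action from a frame pair, a predictive branch is aligned to it, and an action decoder is supervised only on clips with trustworthy labels. All module names, dimensions, and loss weights here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JALASketch(nn.Module):
    """Toy stand-in for the JALA pretraining heads (the paper's actual architecture is not shown on this page)."""
    def __init__(self, feat_dim=512, latent_dim=64, action_dim=22):
        super().__init__()
        # Inverse-dynamics branch: infers a transition-aware latent action from (o_t, o_{t+1}).
        self.inv_dyn = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        # Predictive branch: proposes the latent action from the current observation alone.
        self.policy_head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        # Decoder that grounds the latent in real actions where trustworthy labels exist.
        self.action_decoder = nn.Linear(latent_dim, action_dim)

    def forward(self, feat_t, feat_tp1):
        z_idm = self.inv_dyn(torch.cat([feat_t, feat_tp1], dim=-1))
        z_pred = self.policy_head(feat_t)
        return z_pred, z_idm, self.action_decoder(z_idm)


def jala_loss(z_pred, z_idm, a_pred, a_gt, label_mask, w_act=1.0):
    # (i) Inverse-dynamics alignment on every clip: the predicted latent is pulled
    #     toward the latent inferred from the observed transition (used as a target).
    align = 1.0 - F.cosine_similarity(z_pred, z_idm.detach(), dim=-1).mean()
    # (ii) Ground-truth action alignment, applied only to reliably labeled clips,
    #      which anchors the latent action space to real actions.
    per_clip = F.mse_loss(a_pred, a_gt, reduction="none").mean(dim=-1)
    action = (per_clip * label_mask).sum() / label_mask.sum().clamp(min=1)
    return align + w_act * action


# Toy usage with random features standing in for video-encoder outputs.
model = JALASketch()
feat_t, feat_tp1 = torch.randn(8, 512), torch.randn(8, 512)
a_gt = torch.randn(8, 22)
label_mask = torch.tensor([1., 1., 0., 0., 0., 0., 0., 0.])  # 1 = trusted action label
z_pred, z_idm, a_pred = model(feat_t, feat_tp1)
print(jala_loss(z_pred, z_idm, a_pred, a_gt, label_mask))
```

The per-clip `label_mask` is what lets one objective consume both precisely labeled laboratory clips and weakly labeled in-the-wild footage, which is the trade-off the summary describes.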

📝 Abstract
Despite progress, Vision-Language-Action models (VLAs) are limited by a scarcity of large-scale, diverse robot data. While human manipulation videos offer a rich alternative, existing methods are forced to choose between small, precisely labeled datasets and vast in-the-wild footage with unreliable hand tracking labels. We present JALA, a pretraining framework that learns Jointly-Aligned Latent Actions. JALA bypasses full visual dynamics reconstruction and instead learns a predictive action embedding aligned with both inverse dynamics and real actions. This yields a transition-aware, behavior-centric latent space for learning from heterogeneous human data. We scale this approach with UniHand-Mix, a 7.5M video corpus (>2,000 hours) blending laboratory and in-the-wild footage. Experiments demonstrate that JALA generates more realistic hand motions in both controlled and unconstrained scenarios, significantly improving downstream robot manipulation performance in both simulation and real-world tasks. These results indicate that jointly-aligned latent actions offer a scalable pathway for VLA pretraining from human data.
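
As a companion to the objective sketch above, the following snippet shows one way the scale-versus-fidelity trade-off could be handled at the data level: laboratory clips and in-the-wild clips are mixed into the same batches, with a per-clip fidelity flag that gates the action-label term. The dataset interface, field names, and sizes below are assumptions for illustration, not the UniHand-Mix pipeline itself.

```python
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class ClipDataset(Dataset):
    """Pre-extracted transition features plus a per-clip label-fidelity flag (illustrative)."""
    def __init__(self, num_clips, feat_dim=512, action_dim=22, trusted=False):
        self.feat_t = torch.randn(num_clips, feat_dim)     # placeholder frame features
        self.feat_tp1 = torch.randn(num_clips, feat_dim)
        self.action = torch.randn(num_clips, action_dim)   # lab: measured actions; wild: tracked hands
        self.trusted = float(trusted)

    def __len__(self):
        return len(self.feat_t)

    def __getitem__(self, i):
        return {"feat_t": self.feat_t[i],
                "feat_tp1": self.feat_tp1[i],
                "action": self.action[i],
                "label_mask": torch.tensor(self.trusted)}

# Laboratory clips carry trusted action labels (mask = 1); in-the-wild clips carry
# noisy hand-tracking labels (mask = 0), so only the latent alignment supervises them.
lab = ClipDataset(1_000, trusted=True)
wild = ClipDataset(9_000, trusted=False)
loader = DataLoader(ConcatDataset([lab, wild]), batch_size=256, shuffle=True)
batch = next(iter(loader))   # batch["label_mask"] gates the action-label loss term
```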
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
robot data scarcity
human manipulation videos
hand tracking labels
VLA pretraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint-Aligned Latent Action
Vision-Language-Action
Inverse Dynamics Alignment
Scalable Pretraining
Human Demonstration Learning