LatBot: Distilling Universal Latent Actions for Vision-Language-Action Models

📅 2025-11-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language-action (VLA) models generalize poorly because they neglect physical priors. To address this, we propose a framework that learns embodiment-agnostic, transferable latent action representations from large-scale unlabeled manipulation videos. Our core innovation is decoupling latent actions into learnable "motion tokens" and "scene tokens," explicitly separating robot-executed motion from environment dynamics. We jointly optimize three objectives: future frame reconstruction, multi-step action trajectory prediction, and multimodal sequence modeling, thereby embedding spatial and directional physical priors into the latent space. By distilling these latent actions into VLA models, our method achieves strong few-shot transfer with only 10 real-world trajectories per task: it delivers significant generalization gains on the SIMPLER and LIBERO simulation benchmarks and completes all five challenging real-world tasks on a physical Franka platform.
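To make the joint objective concrete, below is a minimal PyTorch sketch of how the three losses named above could be combined. The function name, tensor shapes, specific loss choices (L2 reconstruction, L1 trajectory regression, cross-entropy sequence modeling), and weights are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the three-part training objective described above.
# Shapes, loss choices, and weights are assumptions for exposition,
# not the paper's actual implementation.
import torch
import torch.nn.functional as F

def joint_loss(pred_frames, target_frames,    # (B, T, C, H, W)
               pred_actions, target_actions,  # (B, horizon, action_dim)
               seq_logits, seq_targets,       # (B, L, vocab), (B, L)
               w_recon=1.0, w_action=1.0, w_seq=1.0):
    """Combine future-frame reconstruction, multi-step action
    trajectory prediction, and multimodal sequence modeling."""
    # 1) Future frame reconstruction (pixel-space L2, assumed here).
    l_recon = F.mse_loss(pred_frames, target_frames)
    # 2) Multi-step action trajectory prediction (e.g., gripper positions
    #    and orientations over a horizon), regressed with L1.
    l_action = F.l1_loss(pred_actions, target_actions)
    # 3) Multimodal sequence modeling as next-token prediction over
    #    interleaved vision / language / latent-action tokens.
    l_seq = F.cross_entropy(seq_logits.flatten(0, 1), seq_targets.flatten())
    return w_recon * l_recon + w_action * l_action + w_seq * l_seq
```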

📝 Abstract
Learning transferable latent actions from large-scale object manipulation videos can significantly enhance generalization in downstream robotics tasks, as such representations are agnostic to different robot embodiments. Existing approaches primarily rely on visual reconstruction objectives while neglecting physical priors, leading to sub-optimal performance in learning universal representations. To address these challenges, we propose a Universal Latent Action Learning framework that takes task instructions and multiple frames as inputs and optimizes both future frame reconstruction and action sequence prediction. Unlike prior works, we incorporate action predictions (e.g., gripper or hand trajectories and orientations), allowing the model to capture richer physical priors such as real-world distances and orientations and enabling seamless transfer to downstream tasks. We further decompose the latent actions into learnable motion and scene tokens to distinguish the robot's active movements from environmental changes, thus filtering out irrelevant dynamics. By distilling the learned latent actions into the latest VLA models, we achieve strong performance across both simulated (SIMPLER and LIBERO) and real-world robot settings. Notably, with only 10 real-world trajectories per task collected on a Franka robot, our approach successfully completes all five challenging tasks, demonstrating strong few-shot transferability in robotic manipulation.
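As a rough illustration of the motion/scene decomposition described in the abstract, the sketch below uses two groups of learnable query tokens that cross-attend to multi-frame features; the actual separation into robot motion versus environment dynamics would come from the training objectives, not the architecture alone. Token counts, dimensions, and the attention layout are assumptions.

```python
# Minimal sketch of decomposing latent actions into learnable "motion"
# and "scene" tokens via cross-attention over frame features. All
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class LatentActionDecomposer(nn.Module):
    def __init__(self, dim=256, n_motion=8, n_scene=8, n_heads=8):
        super().__init__()
        # Learnable queries: motion tokens are meant to capture
        # robot-driven change, scene tokens to absorb environment
        # dynamics (enforced by the losses, not by structure alone).
        self.motion_tokens = nn.Parameter(torch.randn(n_motion, dim))
        self.scene_tokens = nn.Parameter(torch.randn(n_scene, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, frame_feats):
        # frame_feats: (B, T*P, dim) patch features from multiple frames.
        B = frame_feats.shape[0]
        queries = torch.cat([self.motion_tokens, self.scene_tokens], dim=0)
        queries = queries.unsqueeze(0).expand(B, -1, -1)
        latents, _ = self.attn(queries, frame_feats, frame_feats)
        n_motion = self.motion_tokens.shape[0]
        # Split back into the two groups; only the motion tokens would
        # later be distilled into the downstream VLA policy.
        return latents[:, :n_motion], latents[:, n_motion:]
```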
Problem

Research questions and friction points this paper is trying to address.

Learning transferable latent actions from videos for robotics generalization
Addressing neglect of physical priors in existing visual reconstruction approaches
Enhancing few-shot transferability in robotic manipulation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes future frame reconstruction and action sequence prediction
Decomposes latent actions into motion and scene tokens
Distills learned latent actions into vision-language-action models (see the sketch after this list)
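Below is a hedged sketch of the distillation step from the last bullet: a frozen latent-action teacher supplies motion-token targets that the student VLA's intermediate features learn to match. The cosine-alignment loss and all names are assumptions, not the paper's recipe.

```python
# Hypothetical distillation term: align student VLA features with the
# frozen teacher's motion tokens. Cosine alignment is one common
# choice, assumed here for illustration.
import torch
import torch.nn.functional as F

def distillation_loss(student_latents, teacher_latents):
    # Normalize both sides and penalize angular disagreement;
    # detach() keeps gradients out of the frozen teacher.
    s = F.normalize(student_latents, dim=-1)
    t = F.normalize(teacher_latents.detach(), dim=-1)
    return (1 - (s * t).sum(dim=-1)).mean()
```

In practice this term would be added to the student's usual action-prediction loss while the teacher stays frozen.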
👥 Authors
Zuolei Li
Institute of Microelectronics, Chinese Academy of Sciences

Xingyu Gao
Professor of Computer Science, Chinese Academy of Sciences
Machine Learning · Computer Vision · Multimedia · Ubiquitous Computing

Xiaofan Wang
Institute of Microelectronics, Chinese Academy of Sciences

Jianlong Fu
Microsoft Research
Multimedia Analysis · Computer Vision · Robot Learning