villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models

📅 2025-07-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the weak latent action representation capability and poor cross-task generalization in vision-language-action (VLA) models, this paper proposes ViLLA: first, a self-supervised latent action learning module that disentangles action semantics from visual dynamics; second, a multimodal alignment-aware embedding fusion mechanism to enhance joint abstraction across vision, language, and action modalities. Compared to existing approaches, ViLLA achieves more robust latent-space action encoding and improved cross-scenario transferability. Experiments demonstrate significant gains in task success rates (+12.3%–28.7%) and zero-shot generalization performance on both simulation benchmarks (SIMPLER and LIBERO) and real-world robotic platforms (industrial manipulators and dexterous hands). ViLLA establishes a scalable, unified paradigm for action modeling in VLA systems, advancing the state of embodied AI by bridging semantic intent with executable motor control through structured multimodal representation learning.

Technology Category

Application Category

📝 Abstract
Visual-Language-Action (VLA) models have emerged as a popular paradigm for learning robot manipulation policies that can follow language instructions and generalize to novel scenarios. Recent work has begun to explore the incorporation of latent actions, an abstract representation of visual change between two frames, into VLA pre-training. In this paper, we introduce villa-X, a novel Visual-Language-Latent-Action (ViLLA) framework that advances latent action modeling for learning generalizable robot manipulation policies. Our approach improves both how latent actions are learned and how they are incorporated into VLA pre-training. Together, these contributions enable villa-X to achieve superior performance across simulated environments including SIMPLER and LIBERO, as well as on two real-world robot setups including gripper and dexterous hand manipulation. We believe the ViLLA paradigm holds significant promise, and that our villa-X provides a strong foundation for future research.
Problem

Research questions and friction points this paper is trying to address.

Enhancing latent action modeling in vision-language-action models
Improving learning and integration of latent actions in VLA pre-training
Advancing generalizable robot manipulation policies across environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhances latent action modeling in VLA
Improves learning and integration of latent actions
Achieves superior performance in simulated and real environments
🔎 Similar Papers
No similar papers found.