JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak generalization and insufficient fine-grained action modeling of Vision-Language-Action (VLA) models in open-world visual games (e.g., Minecraft), this paper proposes a non-trajectory-based, self-supervised vision-language post-training paradigm. Methodologically, it integrates spatially aware instruction alignment, tokenized keyboard/mouse action modeling, and large-scale Vision-Language Model (VLM)-driven task distillation, jointly enhancing world knowledge, visual recognition, and spatial grounding. The resulting agents can follow human instructions on over 1,000 atomic tasks, the first such result in this domain, and improve performance by 40% over the best agent baseline on a diverse set of atomic tasks, substantially outperforming conventional imitation learning and establishing new state-of-the-art performance. The code, models, and datasets are publicly released.
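The "tokenized keyboard/mouse action modeling" above means discretizing low-level game inputs into special tokens so the VLM can emit actions as ordinary next-token predictions. The sketch below is purely illustrative, not the paper's implementation: the key names, the camera range, and the bin count are all assumptions chosen for the example.

```python
# Illustrative sketch (assumed, not JARVIS-VLA's actual scheme): discretize
# keyboard presses and mouse/camera deltas into special tokens that could
# extend a VLM's vocabulary for action prediction.

KEYS = ["forward", "back", "left", "right", "jump", "sneak", "attack", "use"]
MOUSE_BINS = 11  # assumed: each camera axis binned into 11 buckets over [-10, 10] degrees

def build_action_vocab():
    """Return the list of special action tokens for keys and binned mouse deltas."""
    vocab = [f"<key_{k}>" for k in KEYS]
    vocab += [f"<mouse_x_{i}>" for i in range(MOUSE_BINS)]
    vocab += [f"<mouse_y_{i}>" for i in range(MOUSE_BINS)]
    return vocab

def tokenize_action(pressed_keys, mouse_dx, mouse_dy, lo=-10.0, hi=10.0):
    """Map one environment action (held keys + camera deltas) to action tokens."""
    def bin_delta(d):
        d = max(lo, min(hi, d))  # clamp to the supported camera range
        return round((d - lo) / (hi - lo) * (MOUSE_BINS - 1))
    tokens = [f"<key_{k}>" for k in pressed_keys if k in KEYS]
    tokens.append(f"<mouse_x_{bin_delta(mouse_dx)}>")
    tokens.append(f"<mouse_y_{bin_delta(mouse_dy)}>")
    return tokens

print(len(build_action_vocab()))                     # 8 keys + 2 * 11 mouse bins = 30
print(tokenize_action(["forward", "jump"], 3.2, -10.0))
```

With such a vocabulary, post-training reduces to standard sequence modeling: the model reads the visual observation plus instruction and generates action tokens autoregressively.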

📝 Abstract
Recently, action-based decision-making in open-world environments has gained significant attention. Visual Language Action (VLA) models, pretrained on large-scale web datasets, have shown promise in decision-making tasks. However, previous work has primarily focused on action post-training, often neglecting enhancements to the foundational model itself. In response, we introduce a novel approach, Act from Visual Language Post-Training, which refines Visual Language Models (VLMs) through visual and linguistic guidance in a self-supervised manner. This enhancement improves the models' capabilities in world knowledge, visual recognition, and spatial grounding in open-world environments. Following the above post-training paradigms, we obtain the first VLA models in Minecraft that can follow human instructions on over 1k different atomic tasks, including crafting, smelting, cooking, mining, and killing. Our experiments demonstrate that post-training on non-trajectory tasks leads to a significant 40% improvement over the best agent baseline on a diverse set of atomic tasks. Furthermore, we demonstrate that our approach surpasses traditional imitation learning-based policies in Minecraft, achieving state-of-the-art performance. We have open-sourced the code, models, and datasets to foster further research. The project page can be found at https://craftjarvis.github.io/JarvisVLA.
Problem

Research questions and friction points this paper is trying to address.

Prior VLA work post-trains on action trajectories while neglecting the foundational VLM itself.
Open-world decision-making demands stronger world knowledge, visual recognition, and spatial grounding.
Existing agents fall short of state-of-the-art instruction following across diverse Minecraft tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised visual and linguistic post-training that refines the base VLM before action learning
Enhanced world knowledge, visual recognition, and spatial grounding in open-world environments
First Minecraft VLA models to follow instructions on over 1k atomic tasks, achieving state-of-the-art performance