VLANeXt: Recipes for Building Strong VLA Models

📅 2026-02-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of standardized training and evaluation protocols in current vision–language–action (VLA) modeling, which makes it difficult to identify which design choices actually matter. Within a unified framework and evaluation setup, the study systematically dissects key design choices along three dimensions: foundational components, perception essentials, and action modeling. From this analysis it distills 12 key findings that together form a practical, reproducible recipe for building strong VLA systems. Starting from a simple baseline in the style of RT-2 and OpenVLA, the authors validate these findings through comprehensive ablation studies under a consistent evaluation protocol. The resulting model, VLANeXt, achieves state-of-the-art performance on the LIBERO and LIBERO-plus benchmarks and demonstrates strong generalization in real-world robotic tasks.

📝 Abstract
Following the rise of large foundation models, Vision-Language-Action models (VLAs) emerged, leveraging strong visual and language understanding for general-purpose policy learning. Yet, the current VLA landscape remains fragmented and exploratory. Although many groups have proposed their own VLA models, inconsistencies in training protocols and evaluation settings make it difficult to identify which design choices truly matter. To bring structure to this evolving space, we reexamine the VLA design space under a unified framework and evaluation setup. Starting from a simple VLA baseline similar to RT-2 and OpenVLA, we systematically dissect design choices along three dimensions: foundational components, perception essentials, and action modelling perspectives. From this study, we distill 12 key findings that together form a practical recipe for building strong VLA models. The outcome of this exploration is a simple yet effective model, VLANeXt. VLANeXt outperforms prior state-of-the-art methods on the LIBERO and LIBERO-plus benchmarks and demonstrates strong generalization in real-world experiments. We will release a unified, easy-to-use codebase that serves as a common platform for the community to reproduce our findings, explore the design space, and build new VLA variants on top of a shared foundation.
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
design choices
evaluation benchmarks
training protocols
model generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action
systematic design analysis
unified evaluation framework
VLANeXt
foundation model recipes