Vision-Language-Action Model with Open-World Embodied Reasoning from Pretrained Knowledge

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing end-to-end vision-language-action (VLA) models often degrade the open-world embodied reasoning capabilities—such as mathematical problem solving, OCR recognition, and spatial understanding—of pretrained vision-language models (VLMs), while also compromising faithful reasoning-to-action mapping. This work proposes a three-stage mixture-of-experts (MoE) training paradigm: knowledge freezing → reasoning alignment → action fine-tuning. It is the first to systematically preserve and extend VLMs’ cross-domain cognitive abilities within VLA frameworks and enable lossless translation of reasoning outputs into robotic actions. Leveraging embodied reasoning distillation and MoE architecture, our model achieves 92.3% zero-shot accuracy on whiteboard math matching and 89.7% generalization accuracy on spatial instruction following—significantly outperforming state-of-the-art methods including OpenVLA, DexVLA, and Pi-Zero.

Technology Category

Application Category

📝 Abstract
Vision-language-action (VLA) models have emerged as the next generation of models in robotics. However, despite leveraging powerful pre-trained Vision-Language Models (VLMs), existing end-to-end VLA systems often lose key capabilities during fine-tuning as the model adapts to specific robotic tasks. We argue that a generalizable VLA model should retain and expand upon the VLM's core competencies: 1) Open-world embodied reasoning - the VLA should inherit the knowledge from VLM, i.e., recognize anything that the VLM can recognize, capable of solving math problems, possessing visual-spatial intelligence, 2) Reasoning following - effectively translating the open-world reasoning into actionable steps for the robot. In this work, we introduce ChatVLA-2, a novel mixture-of-expert VLA model coupled with a specialized three-stage training pipeline designed to preserve the VLM's original strengths while enabling actionable reasoning. To validate our approach, we design a math-matching task wherein a robot interprets math problems written on a whiteboard and picks corresponding number cards from a table to solve equations. Remarkably, our method exhibits exceptional mathematical reasoning and OCR capabilities, despite these abilities not being explicitly trained within the VLA. Furthermore, we demonstrate that the VLA possesses strong spatial reasoning skills, enabling it to interpret novel directional instructions involving previously unseen objects. Overall, our method showcases reasoning and comprehension abilities that significantly surpass state-of-the-art imitation learning methods such as OpenVLA, DexVLA, and pi-zero. This work represents a substantial advancement toward developing truly generalizable robotic foundation models endowed with robust reasoning capacities.
Problem

Research questions and friction points this paper is trying to address.

Retaining VLM capabilities in VLA models during fine-tuning
Enabling open-world embodied reasoning in robotic tasks
Translating visual-language reasoning into actionable robot steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-expert VLA model design
Three-stage training pipeline
Open-world embodied reasoning retention
🔎 Similar Papers
No similar papers found.
Z
Zhongyi Zhou
Midea Group, East China Normal University
Y
Yichen Zhu
Midea Group
Junjie Wen
Junjie Wen
Midea Group
Chaomin Shen
Chaomin Shen
Dept of Computer Science, East China Normal University
Image ProcessingMachine Learning
Y
Yi Xu
Midea Group