InternVLA-A1: Unifying Understanding, Generation and Action for Robotic Manipulation

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 3 · Influential: 0
🤖 AI Summary
This work addresses two complementary weaknesses: existing vision-language-action (VLA) models struggle to reason about physical dynamics, while video-prediction-based world models are brittle due to weak semantic grounding and error accumulation. The authors propose a unified Mixture-of-Transformers architecture that integrates semantic understanding, visual future prediction, and action decision-making within a single framework, with the three experts interacting through a shared masked self-attention mechanism. Built upon InternVL3 and Qwen3-VL, the model is instantiated at 2B and 3B parameter scales and pretrained on a hybrid dataset combining synthetic and real-world data from InternData-A1 and Agibot-World, narrowing the sim-to-real gap. Evaluated across twelve real-world and simulated tasks, it substantially outperforms pi0 and GR00T N1.5, achieving a 14.5% improvement on everyday tasks and 40%–73.3% gains in dynamic scenarios such as conveyor-belt sorting.
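
To make the architecture concrete, below is a minimal, hypothetical PyTorch sketch of one Mixture-of-Transformers layer: each expert keeps its own projection and FFN weights, but queries, keys, and values from all three token streams are concatenated so that a single block attention mask governs how understanding, generation, and action tokens may attend to one another. The class names, dimensions, and mask pattern are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpertBlock(nn.Module):
    """Per-modality parameters; attention itself is computed jointly."""
    def __init__(self, d_model: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

class MoTLayer(nn.Module):
    """One layer: three experts, one shared masked self-attention."""
    def __init__(self, d_model: int = 256, n_heads: int = 8,
                 experts=("understanding", "generation", "action")):
        super().__init__()
        self.n_heads = n_heads
        self.experts = nn.ModuleDict({name: ExpertBlock(d_model) for name in experts})

    def forward(self, tokens: dict, mask: torch.Tensor) -> dict:
        names, lens, qs, ks, vs = list(tokens), [], [], [], []
        for name in names:
            blk = self.experts[name]
            q, k, v = blk.qkv(blk.norm1(tokens[name])).chunk(3, dim=-1)
            qs.append(q); ks.append(k); vs.append(v)
            lens.append(tokens[name].shape[1])

        def heads(x):  # (B, T, D) -> (B, H, T, D/H)
            b, t, d = x.shape
            return x.view(b, t, self.n_heads, d // self.n_heads).transpose(1, 2)

        q = heads(torch.cat(qs, dim=1))
        k = heads(torch.cat(ks, dim=1))
        v = heads(torch.cat(vs, dim=1))
        # Joint attention over the concatenated sequence; `mask` is a
        # (T, T) boolean matrix where True means "may attend".
        attn = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
        attn = attn.transpose(1, 2).flatten(2)  # back to (B, T, D)

        out = {}
        for name, chunk in zip(names, attn.split(lens, dim=1)):
            blk = self.experts[name]
            h = tokens[name] + blk.out(chunk)        # residual attention path
            out[name] = h + blk.ffn(blk.norm2(h))    # residual FFN path
        return out
```

One plausible mask, assuming information flows from understanding to foresight to action:

```python
# Assumed pattern: understanding attends to itself, generation to
# understanding + itself, action to all three streams.
B, D = 2, 256
lens = {"understanding": 16, "generation": 16, "action": 8}
tokens = {name: torch.randn(B, n, D) for name, n in lens.items()}
T = sum(lens.values())
mask = torch.zeros(T, T, dtype=torch.bool)
mask[:16, :16] = True    # understanding -> understanding
mask[16:32, :32] = True  # generation   -> understanding, generation
mask[32:, :] = True      # action       -> everything
out = MoTLayer(d_model=D)(tokens, mask)  # dict of updated token streams
```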

📝 Abstract
Prevalent Vision-Language-Action (VLA) models are typically built upon Multimodal Large Language Models (MLLMs) and demonstrate exceptional proficiency in semantic understanding, but they inherently lack the capability to deduce physical world dynamics. Consequently, recent approaches have shifted toward World Models, typically formulated via video prediction; however, these methods often suffer from a lack of semantic grounding and exhibit brittleness when handling prediction errors. To synergize semantic understanding with dynamic predictive capabilities, we present InternVLA-A1. This model employs a unified Mixture-of-Transformers architecture, coordinating three experts for scene understanding, visual foresight generation, and action execution. These components interact seamlessly through a unified masked self-attention mechanism. Building upon InternVL3 and Qwen3-VL, we instantiate InternVLA-A1 at 2B and 3B parameter scales. We pre-train these models on hybrid synthetic-real datasets spanning InternData-A1 and Agibot-World, covering over 533M frames. This hybrid training strategy effectively harnesses the diversity of synthetic simulation data while minimizing the sim-to-real gap. We evaluate InternVLA-A1 across 12 real-world robotic tasks and simulation benchmarks. It significantly outperforms leading models such as pi0 and GR00T N1.5, achieving a 14.5% improvement in daily tasks and a 40%–73.3% boost in dynamic settings such as conveyor-belt sorting.
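
As an illustration of the hybrid pretraining recipe, the sketch below (assumed, not from the paper) interleaves synthetic trajectories from InternData-A1 with real ones from Agibot-World at a fixed mixing ratio; the `real_ratio` value and the dataset wrappers are placeholders.

```python
import random
from torch.utils.data import IterableDataset

class HybridMixture(IterableDataset):
    """Endlessly interleave a synthetic and a real trajectory source.

    `real_ratio` is a hypothetical knob; the paper does not specify
    its mixing weights, only that training spans both data sources.
    """
    def __init__(self, synthetic, real, real_ratio: float = 0.5, seed: int = 0):
        self.sources = [synthetic, real]
        self.weights = [1.0 - real_ratio, real_ratio]
        self.rng = random.Random(seed)

    def __iter__(self):
        iters = [iter(s) for s in self.sources]
        while True:  # infinite stream; cap steps in the training loop
            idx = self.rng.choices(range(len(iters)), weights=self.weights)[0]
            try:
                yield next(iters[idx])
            except StopIteration:
                iters[idx] = iter(self.sources[idx])  # restart exhausted source
                yield next(iters[idx])

# Usage sketch: sim_ds / real_ds stand in for the actual dataset readers.
# loader = DataLoader(HybridMixture(sim_ds, real_ds, real_ratio=0.3), batch_size=64)
```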
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
World Models
Semantic Understanding
Dynamic Prediction
Robotic Manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action (VLA)
Mixture-of-Transformers
World Model
Visual Foresight Generation
Sim-to-Real Transfer