🤖 AI Summary
This work addresses the fundamental challenge of achieving generalization and scalability in instruction-following robots. We propose a spatially grounded, unified vision-language-action framework that leverages spatial grounding as the central nexus for both instruction understanding and action generation — enabling, for the first time, joint modeling of “where to act” (spatial localization) and “how to act” (action policy), and supporting plug-and-play control across heterogeneous robot morphologies. Our method employs a two-stage training paradigm: (1) spatial reasoning pre-training on 2.3 million samples, and (2) embodied action post-training with spatial prompting. To support learning, we develop a simulation engine that generates 244K pick-and-place episodes. Evaluated across multiple benchmarks, our approach achieves an average performance gain of 6.2%, improves zero-shot generalization to unseen objects and novel scenes by 20.6%, and outperforms prior methods by over 10% on long-horizon tasks.
📝 Abstract
We introduce InternVLA-M1, a unified framework for spatial grounding and robot control that advances instruction-following robots toward scalable, general-purpose intelligence. Its core idea is spatially guided vision-language-action training, where spatial grounding serves as the critical link between instructions and robot actions. InternVLA-M1 employs a two-stage pipeline: (i) spatial grounding pre-training on over 2.3M spatial reasoning samples to determine “where to act” by aligning instructions with visual, embodiment-agnostic positions, and (ii) spatially guided action post-training to decide “how to act” by generating embodiment-aware actions through plug-and-play spatial prompting. This spatially guided training recipe yields consistent gains: InternVLA-M1 outperforms its variant without spatial guidance by +14.6% on SimplerEnv Google Robot, +17% on WidowX, and +4.3% on LIBERO Franka, while demonstrating stronger spatial reasoning capability in box, point, and trace prediction. To further scale instruction following, we built a simulation engine that collected 244K generalizable pick-and-place episodes, enabling a 6.2% average improvement across 200 tasks and 3K+ objects. In real-world cluttered pick-and-place, InternVLA-M1 improved by 7.3%, and with synthetic co-training it achieved +20.6% on unseen objects and novel configurations. Moreover, in long-horizon, reasoning-intensive scenarios, it surpassed existing works by over 10%. These results highlight spatially guided training as a unifying principle for scalable and resilient generalist robots. Code and models are available at https://github.com/InternRobotics/InternVLA-M1.