SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model

📅 2025-01-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address robots' limited spatial understanding in cross-environment task execution, this paper proposes SpatialVLA, a vision-language-action (VLA) foundation model with explicit 3D spatial perception. Methodologically, it introduces Ego3D Position Encoding to inject egocentric 3D geometric observations into the model's input, and designs Adaptive Action Grids for a transferable, rescalable discretization of spatial robot actions. The model is pretrained on top of a vision-language model with 1.1 million real-world robot interaction episodes and can then be applied to many tasks in a zero-shot manner. Experiments in both simulation and on real robots demonstrate strong zero-shot generalization across diverse tasks and improved reasoning over complex motion trajectories. Moreover, the pre-learned action grids can be re-discretized for new robotic platforms, enabling rapid adaptation with strong in-distribution generalization and out-of-distribution robustness.
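For intuition on the Ego3D Position Encoding mentioned above, the following is a minimal PyTorch sketch of one way such an encoding could be wired up: per-patch depth is back-projected through the camera intrinsics into egocentric 3D points, which a small MLP turns into additive position embeddings for the visual tokens. The module name, MLP design, and per-patch inputs are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class Ego3DPositionEncoding(nn.Module):
    """Hedged sketch (not the paper's code): back-project per-patch depth into
    egocentric 3D coordinates and encode them as additive position embeddings."""

    def __init__(self, embed_dim: int):
        super().__init__()
        # Small MLP mapping (x, y, z) camera-frame points to the visual token dimension.
        self.mlp = nn.Sequential(
            nn.Linear(3, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, vis_tokens, uv, depth, intrinsics):
        # vis_tokens: (B, N, D) patch features from the vision backbone
        # uv:         (B, N, 2) patch-centre pixel coordinates
        # depth:      (B, N)    estimated depth per patch (metres)
        # intrinsics: (B, 3, 3) pinhole camera matrix K
        fx = intrinsics[:, 0, 0].unsqueeze(1)
        fy = intrinsics[:, 1, 1].unsqueeze(1)
        cx = intrinsics[:, 0, 2].unsqueeze(1)
        cy = intrinsics[:, 1, 2].unsqueeze(1)
        # Standard pinhole back-projection: pixel + depth -> egocentric (x, y, z).
        x = (uv[..., 0] - cx) / fx * depth
        y = (uv[..., 1] - cy) / fy * depth
        points = torch.stack([x, y, depth], dim=-1)   # (B, N, 3) egocentric points
        return vis_tokens + self.mlp(points)          # inject 3D geometry into tokens
```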

📝 Abstract
In this paper, we claim that spatial understanding is the key point in robot manipulation, and propose SpatialVLA to explore effective spatial representations for the robot foundation model. Specifically, we introduce Ego3D Position Encoding to inject 3D information into the input observations of the visual-language-action model, and propose Adaptive Action Grids to represent spatial robot movement actions with adaptive discretized action grids, facilitating the learning of generalizable and transferable spatial action knowledge for cross-robot control. SpatialVLA is first pre-trained on top of a vision-language model with 1.1 million real-world robot episodes, to learn a generalist manipulation policy across multiple robot environments and tasks. After pre-training, SpatialVLA is directly applied to perform numerous tasks in a zero-shot manner. The superior results in both simulation and on real-world robots demonstrate its advantage in inferring complex robot motion trajectories and its strong in-domain multi-task generalization ability. We further show that the proposed Adaptive Action Grids offer a new and effective way to fine-tune the pre-trained SpatialVLA model for new simulation and real-world setups, where the pre-learned action grids are re-discretized to capture the robot-specific spatial action movements of the new setups. The superior results from extensive evaluations demonstrate exceptional in-distribution generalization and out-of-distribution adaptation capability, highlighting the crucial benefit of the proposed spatial-aware representations for generalist robot policy learning. All details and code will be open-sourced.
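The Adaptive Action Grids described in the abstract can be thought of as data-driven bins over the continuous action dimensions: bin edges are fit to the empirical action distribution (for example, from quantiles) so that frequent fine motions get finer resolution, and the same fitting can be re-run ("re-discretized") on a new robot's data during fine-tuning. The quantile-based sketch below illustrates this idea under those assumptions; it is not the paper's exact algorithm.

```python
import numpy as np


def fit_adaptive_grid(actions: np.ndarray, n_bins: int) -> np.ndarray:
    """Hedged sketch of adaptive action discretization: choose bin edges per action
    dimension from empirical quantiles, so dense regions of the action distribution
    receive finer resolution than rare large motions."""
    # actions: (N, D) continuous action deltas; returns (D, n_bins + 1) bin edges.
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)
    return np.stack([np.quantile(actions[:, d], quantiles) for d in range(actions.shape[1])])


def discretize(actions: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Map each continuous action dimension to its grid-cell index (action token)."""
    tokens = np.empty_like(actions, dtype=np.int64)
    for d in range(actions.shape[1]):
        idx = np.searchsorted(edges[d], actions[:, d]) - 1
        tokens[:, d] = np.clip(idx, 0, len(edges[d]) - 2)
    return tokens


# Fine-tuning on a new robot: re-fit ("re-discretize") the grid on that robot's own
# action data so the same token vocabulary covers its specific motion statistics, e.g.
# new_edges = fit_adaptive_grid(new_robot_actions, n_bins=256)
```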
Problem

Research questions and friction points this paper is trying to address.

Spatial Concepts
Robot Actions
Task Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

SpatialVLA
Ego3D Position Encoding
Adaptive Action Grid
🔎 Similar Papers
No similar papers found.
Delin Qu
PhD Candidate, Fudan University
Embodied AI · 3D Vision · Multimodal Generation
Haoming Song
Shanghai AI Laboratory
Qizhi Chen
PhD Candidate, Zhejiang University
Multimodal Reasoning · Embodied AI · 3D Vision
Yuanqi Yao
INSAIT
Robotics · Manipulation
Xinyi Ye
Shanghai AI Laboratory
Yan Ding
Shanghai AI Laboratory
Zhigang Wang
Shanghai AI Laboratory
JiaYuan Gu
Shanghai AI Laboratory
Bin Zhao
Shanghai AI Laboratory
Dong Wang
Shanghai AI Laboratory
Xuelong Li
Shanghai AI Laboratory