StemVLA: An Open-Source Vision-Language-Action Model with Future 3D Spatial Geometry Knowledge and 4D Historical Representation

πŸ“… 2026-02-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing vision-language-action models lack explicit modeling of 3D spatial structure and temporal dynamics, limiting robotic agents’ spatial reasoning and long-horizon decision-making in dynamic environments. This work proposes the first framework that jointly models future 3D geometry prediction and historical 4D spatiotemporal representations. By pretraining a video-geometry Transformer backbone (VideoFormer) equipped with temporal attention mechanisms, the model extracts and aggregates implicit 3D world representations from historical observations and integrates predicted future scene geometry to enhance environmental understanding. Evaluated on the CALVIN ABC-D benchmark, the approach significantly improves long-horizon task success rates, enabling more accurate spatial reasoning and action planning.

πŸ“ Abstract
Vision-language-action (VLA) models integrate visual observations and language instructions to predict robot actions, demonstrating promising generalization in manipulation tasks. However, most existing approaches primarily rely on direct mappings from 2D visual inputs to action sequences, without explicitly modeling the underlying 3D spatial structure or temporal world dynamics. Such representations may limit spatial reasoning and long-horizon decision-making in dynamic environments. To address this limitation, we propose StemVLA, a novel framework that explicitly incorporates both future-oriented 3D spatial knowledge and historical 4D spatiotemporal representations into action prediction. First, instead of relying solely on observed images, StemVLA forecasts structured 3D future spatial-geometric world knowledge, enabling the model to anticipate upcoming scene geometry and object configurations. Second, to capture temporal consistency and motion dynamics, we feed historical image frames into a pretrained video-geometry transformer backbone to extract implicit 3D world representations, and further aggregate them across time using a temporal attention module, termed VideoFormer [20], forming a unified 4D historical spatiotemporal representation. By jointly modeling 2D observations, predicted 3D future structure, and aggregated 4D temporal dynamics, StemVLA enables more comprehensive world understanding for robot manipulation. Extensive experiments in simulation demonstrate that StemVLA significantly improves long-horizon task success and achieves state-of-the-art performance on the CALVIN ABC-D benchmark [46], with an average sequence length of XXX.
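The abstract describes aggregating per-frame 3D features across time with a temporal attention module. The following is a minimal NumPy sketch of that aggregation step only; the identity query/key/value projections and the choice of the latest frame as the query are illustrative assumptions, not the paper's actual VideoFormer implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(frame_feats):
    """Aggregate per-frame features (T, D) into a single (D,) history
    vector. Sketch assumptions: the most recent frame supplies the
    query; keys/values are the raw frame features (no learned
    projections, single head)."""
    T, D = frame_feats.shape
    q = frame_feats[-1]                 # query from the latest frame
    k = v = frame_feats                 # keys/values: all frames
    scores = k @ q / np.sqrt(D)         # (T,) scaled dot-product scores
    weights = softmax(scores)           # attention over the T frames
    return weights @ v                  # (D,) aggregated history token

# Usage: four historical frames with 8-dim features.
history = np.random.default_rng(0).normal(size=(4, 8))
token = temporal_attention(history)     # shape (8,)
```

In a full model, learned query/key/value projections and multiple heads would replace the identity mappings, and the output token would be fed to the action-prediction head alongside the 2D and predicted-3D features.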
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
3D spatial reasoning
temporal dynamics
robot manipulation
long-horizon decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action
3D spatial geometry prediction
4D spatiotemporal representation
VideoFormer
robot manipulation
Jiasong Xiao
Ricoh Software Research Center (Beijing) Co., Ltd.
Yutao She
Peking University
Kai Li
Beijing YZH Engineering Technology Co., Ltd.
Yuyang Sha
Macao Polytechnic University
Ziang Cheng
XR Vision Labs, Tencent
Ziang Tong
University of Science and Technology Beijing