A Pragmatic VLA Foundation Model

📅 2026-01-26
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work proposes LingBot-VLA, a unified vision-language-action (VLA) foundation model trained on approximately 20,000 hours of real-world manipulation data spanning nine major dual-arm robotic platforms. By integrating visual, linguistic, and action modalities within a single architecture, LingBot-VLA achieves high performance and strong cross-platform generalization while significantly improving training and inference efficiency. Evaluated on three distinct robotic platforms across 100 tasks each (with 130 post-training episodes per task), the model substantially outperforms existing approaches. It attains a throughput of 261 samples per second per GPU under an 8-GPU setup, yielding a 1.5–2.8× speedup over existing VLA codebases. The authors publicly release the code, models, and benchmark datasets to advance standardized evaluation and research on complex robotic tasks.

📝 Abstract
Offering great potential in robotic manipulation, a capable Vision-Language-Action (VLA) foundation model is expected to faithfully generalize across tasks and platforms while ensuring cost efficiency (e.g., data and GPU hours required for adaptation). To this end, we develop LingBot-VLA with around 20,000 hours of real-world data from 9 popular dual-arm robot configurations. Through a systematic assessment on 3 robotic platforms, each completing 100 tasks with 130 post-training episodes per task, our model achieves clear superiority over competitors, showcasing its strong performance and broad generalizability. We have also built an efficient codebase, which delivers a throughput of 261 samples per second per GPU with an 8-GPU training setup, representing a 1.5–2.8× speedup (depending on the underlying VLM base model) over existing VLA-oriented codebases. The above features ensure that our model is well-suited for real-world deployment. To advance the field of robot learning, we provide open access to the code, base model, and benchmark data, with a focus on enabling more challenging tasks and promoting sound evaluation standards.
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
robotic manipulation
foundation model
generalization
cost efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action (VLA)
real-world robot data
cross-platform generalization
efficient training codebase
open-source foundation model
👥 Authors
Wei Wu
Fan Lu
Yunnan Wang
Department of Computer Science and Engineering, Shanghai Jiao Tong University
Computer Vision · Multimodal Representation Learning
Shuai Yang
Shi Liu
Fangjing Wang
Qian Zhu
He Sun
Yong Wang
Shuailei Ma
Northeastern University, China
Open-World Object Detection · Human Object Interaction Detection
Yiyu Ren
Kejia Zhang
Hui Yu
Jingmei Zhao
Shuai Zhou
Zhenqi Qiu
Houlong Xiong
Ziyu Wang
Zechen Wang
Ran Cheng
Yong-Lu Li
Associate Professor, Shanghai Jiao Tong University/Shanghai Innovation Institute
Physical Reasoning · Robotics · Computer Vision · Machine Learning · Embodied AI
Yong-Zhi Huang
Xing Zhu
Yujun Shen
Ant Group
Generative Modeling · Computer Vision · Deep Learning
Kecheng Zheng