🤖 AI Summary
This work proposes LingBot-VLA, a unified vision-language-action (VLA) foundation model trained on approximately 20,000 hours of real-world manipulation data spanning nine popular dual-arm robot configurations. By integrating visual, linguistic, and action modalities within a single architecture, LingBot-VLA achieves strong performance and cross-platform generalization while significantly improving training efficiency. Evaluated on three distinct robotic platforms, each covering 100 tasks with 130 post-training episodes per task, the model substantially outperforms existing approaches. It attains a throughput of 261 samples per second per GPU under an 8-GPU training setup, yielding a 1.5–2.8× speedup (depending on the underlying VLM base model) over existing VLA-oriented codebases. The authors publicly release the code, base model, and benchmark data to advance standardized evaluation and research on complex robotic tasks.
📝 Abstract
A capable Vision-Language-Action (VLA) foundation model offers great potential for robotic manipulation: it is expected to generalize faithfully across tasks and platforms while remaining cost-efficient (e.g., in the data and GPU hours required for adaptation). To this end, we develop LingBot-VLA with around 20,000 hours of real-world data from 9 popular dual-arm robot configurations. Through a systematic assessment on 3 robotic platforms, each covering 100 tasks with 130 post-training episodes per task, our model achieves clear superiority over competing methods, demonstrating strong performance and broad generalizability. We have also built an efficient codebase that delivers a throughput of 261 samples per second per GPU in an 8-GPU training setup, a 1.5–2.8$\times$ speedup (depending on the underlying VLM base model) over existing VLA-oriented codebases. These features make our model well-suited for real-world deployment. To advance the field of robot learning, we provide open access to the code, base model, and benchmark data, with a focus on enabling more challenging tasks and promoting sound evaluation standards.
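To put the reported training throughput in perspective, the back-of-envelope sketch below (plain Python, not from the paper's codebase) aggregates the per-GPU figure across the 8-GPU setup and derives the implied per-sample time; only the 261 samples/s/GPU, 8-GPU, and 1.5–2.8× numbers come from the abstract, and all variable names are illustrative assumptions.

```python
# Back-of-envelope arithmetic based on the figures quoted in the abstract.
# Variable names are illustrative and do not come from the LingBot-VLA codebase.

per_gpu_throughput = 261      # samples / second / GPU (reported)
num_gpus = 8                  # training setup (reported)

aggregate_throughput = per_gpu_throughput * num_gpus       # total samples / second
seconds_per_sample_per_gpu = 1.0 / per_gpu_throughput      # implied time per sample on one GPU

# Reported 1.5-2.8x speedup range (depends on the VLM base model), expressed as the
# per-GPU throughput an existing VLA codebase would have under each bound.
baseline_throughput_low = per_gpu_throughput / 2.8
baseline_throughput_high = per_gpu_throughput / 1.5

print(f"Aggregate throughput: {aggregate_throughput} samples/s on {num_gpus} GPUs")
print(f"Per-sample time on one GPU: {seconds_per_sample_per_gpu * 1e3:.2f} ms")
print(f"Implied baseline per-GPU throughput: "
      f"{baseline_throughput_low:.0f}-{baseline_throughput_high:.0f} samples/s")
```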