VLA²: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action (VLA) models exhibit severely limited generalization to unseen object descriptions and textures not encountered during training. To address this, we propose VLA², a novel embodied agent framework that dynamically integrates web retrieval, open-vocabulary object detection, and contextual reasoning at execution time—enabling real-time acquisition and fusion of external multimodal knowledge to enhance semantic and visual understanding of novel objects. Built upon OpenVLA as the execution backbone, VLA² achieves cross-distribution manipulation policies in the LIBERO simulation environment. Evaluated on a newly constructed three-level generalization benchmark, VLA² improves success rates by 44.2% on hard tasks and 20.2% on average across all environments, without compromising in-domain performance. Our core contribution is the first integration of real-time, web-sourced knowledge into the closed-loop VLA control pipeline, substantially advancing open-world embodied generalization.

📝 Abstract
Current vision-language-action (VLA) models, pre-trained on large-scale robotic data, exhibit strong multi-task capabilities and generalize well to variations in visual and language instructions for manipulation. However, their success rate drops significantly when faced with object concepts outside the training data, such as unseen object descriptions and textures. To address this, we propose a novel agentic framework, VLA², which uses OpenVLA as the execution backbone and leverages external modules such as web retrieval and object detection to provide visual and textual knowledge about target objects to the VLA. This approach mitigates generalization failure when handling out-of-distribution objects. Based on the LIBERO simulation environment, we introduce novel objects and object descriptions to construct a new evaluation benchmark with three difficulty levels to test the effectiveness of our method. Our framework outperforms current state-of-the-art models on our hard-level generalization benchmark. Compared to the standalone OpenVLA baseline, VLA² achieves a 44.2% improvement in success rate on the hard-level benchmark and an average improvement of 20.2% across all customized environments, without any performance degradation on in-domain tasks. Project website: https://vla-2.github.io.
Problem

Research questions and friction points this paper is trying to address.

Addressing VLA models' failure with unseen object concepts
Improving generalization for out-of-distribution objects in manipulation
Enhancing success rates for unfamiliar object descriptions and textures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic framework enhances VLA models with external modules
Integrates web retrieval and object detection for unseen objects
Improves success rate on out-of-distribution manipulation tasks
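The agentic loop described above can be illustrated with a minimal sketch: when the instruction names a concept unseen during training, external modules (web retrieval, open-vocabulary detection) supply textual and visual knowledge that is fused into the VLA's input before action prediction. All names below (`retrieve_web_knowledge`, `detect_object`, `vla_policy`, `KNOWN_CONCEPTS`) are illustrative stand-ins, not the paper's actual API; the real modules would call a search engine, a detector such as an open-vocabulary model, and the OpenVLA backbone.

```python
# Hedged sketch of the VLA² agentic loop: augment the VLA only when the
# target concept is out-of-distribution. Module implementations are dummies.

KNOWN_CONCEPTS = {"mug", "plate", "bowl"}  # assumed in-domain objects


def retrieve_web_knowledge(concept: str) -> str:
    """Stand-in for web retrieval: return a textual description of the concept."""
    return f"a '{concept}' is a graspable household object"


def detect_object(image, concept: str):
    """Stand-in for open-vocabulary detection: return a bounding box (dummy)."""
    return (10, 20, 50, 60)  # (x1, y1, x2, y2)


def vla_policy(image, instruction: str, extra_context: str = ""):
    """Stand-in for the OpenVLA backbone: map observation + text to an action."""
    return {"instruction": instruction, "context": extra_context}


def agentic_step(image, instruction: str, target: str):
    """One closed-loop control step with external knowledge injection."""
    if target in KNOWN_CONCEPTS:
        # In-domain object: run the VLA unchanged, preserving base performance.
        return vla_policy(image, instruction)
    # Unseen concept: fetch external knowledge and localize the object,
    # then fuse both into the VLA's context before predicting an action.
    knowledge = retrieve_web_knowledge(target)
    bbox = detect_object(image, target)
    context = f"{knowledge}; located at {bbox}"
    return vla_policy(image, instruction, extra_context=context)
```

The design point this sketch captures is that the VLA backbone is untouched for in-domain objects, which is consistent with the reported result that in-domain performance does not degrade.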
Authors

Han Zhao — Zhejiang University, China; MiLAB, Westlake University, China
Jiaxuan Zhang — MiLAB, Westlake University, China; Southern University of Science and Technology, China
Wenxuan Song — The Hong Kong University of Science and Technology (Guangzhou)
Pengxiang Ding — Zhejiang University
Donglin Wang — MiLAB, Westlake University, China