ObjectVLA: End-to-End Open-World Object Manipulation Without Demonstration

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robot imitation learning suffers from poor generalization to novel objects and a heavy reliance on extensive human demonstrations. This paper proposes ObjectVLA, an end-to-end open-world object manipulation framework that transfers learned skills to unseen objects without requiring new demonstrations for them. Methodologically, it introduces (1) a Vision-Language-Action (VLA) model co-trained with vision-language pair data, which injects knowledge about target objects and establishes an implicit link between an object and the desired action, and (2) a lightweight fine-tuning recipe based on smartphone-captured images, enabling adaptation to new objects from only a few real-world photos. Evaluated on a physical robot platform, the approach achieves a 64% success rate in selecting objects not seen during training, across 100 novel objects, substantially reducing dependence on manual demonstrations. By bridging semantic instruction grounding with embodied perception, this work points toward a more scalable paradigm for open-world robotic manipulation.
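To make the co-training idea concrete, below is a minimal, hypothetical PyTorch sketch; it is not the authors' implementation. A toy policy (TinyVLA, a placeholder name) shares one backbone between an action head, supervised by robot demonstration batches, and an object-grounding head, supervised by cheap image-text batches, which is roughly how a VLA policy can pick up an implicit object-action link for objects that never appear in the demonstrations. The tensor shapes, label space, and mixing weight are all assumptions.

```python
# Hypothetical co-training sketch (not the authors' code): mix robot demonstration
# batches with vision-language batches so object grounding transfers to the policy.
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    """Toy stand-in for a VLA policy: shared encoder, action head, grounding head."""
    def __init__(self, obs_dim=64, txt_dim=32, act_dim=7, n_objects=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim + txt_dim, 128), nn.ReLU())
        self.action_head = nn.Linear(128, act_dim)       # supervised by robot demos
        self.grounding_head = nn.Linear(128, n_objects)  # supervised by image-text pairs

    def forward(self, obs, txt):
        h = self.encoder(torch.cat([obs, txt], dim=-1))
        return self.action_head(h), self.grounding_head(h)

policy = TinyVLA()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

for step in range(100):
    # (a) Robot demonstration batch: behavior-cloning loss on expert actions.
    obs, txt, expert_action = torch.randn(8, 64), torch.randn(8, 32), torch.randn(8, 7)
    pred_action, _ = policy(obs, txt)
    bc_loss = nn.functional.mse_loss(pred_action, expert_action)

    # (b) Vision-language batch: object-grounding loss on web or smartphone images.
    #     No robot actions are needed for the novel objects that appear here.
    obs_vl, txt_vl = torch.randn(8, 64), torch.randn(8, 32)
    obj_label = torch.randint(0, 100, (8,))
    _, logits = policy(obs_vl, txt_vl)
    vl_loss = nn.functional.cross_entropy(logits, obj_label)

    loss = bc_loss + 0.5 * vl_loss   # the mixing weight is a guess
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper the same idea rides on a full pretrained VLA backbone and real image-text data; the toy tensors above only illustrate how the two losses can be mixed in one training loop.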

📝 Abstract
Imitation learning has proven to be highly effective in teaching robots dexterous manipulation skills. However, it typically relies on large amounts of human demonstration data, which limits its scalability and applicability in dynamic, real-world environments. One key challenge in this context is object generalization, where a robot trained to perform a task with one object, such as "hand over the apple," struggles to transfer its skills to a semantically similar but visually different object, such as "hand over the peach." This gap in generalization to new objects beyond those in the same category has yet to be adequately addressed in previous work on end-to-end visuomotor policy learning. In this paper, we present a simple yet effective approach for achieving object generalization through Vision-Language-Action (VLA) models, referred to as ObjectVLA. Our model enables robots to generalize learned skills to novel objects without requiring explicit human demonstrations for each new target object. By leveraging vision-language pair data, our method provides a lightweight and scalable way to inject knowledge about the target object, establishing an implicit link between the object and the desired action. We evaluate ObjectVLA on a real robotic platform, demonstrating its ability to generalize across 100 novel objects with a 64% success rate in selecting objects not seen during training. Furthermore, we propose a more accessible method for enhancing object generalization in VLA models, using a smartphone to capture a few images and fine-tune the pre-trained model. These results highlight the effectiveness of our approach in enabling object-level generalization and reducing the need for extensive human demonstrations, paving the way for more flexible and scalable robotic learning systems.
Problem

Research questions and friction points this paper is trying to address.

Generalization to new objects in robotics
Reducing reliance on human demonstrations
Enhancing object manipulation via Vision-Language-Action models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language-Action models
Generalization without human demonstrations
Smartphone-enhanced model fine-tuning
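The smartphone-enhanced fine-tuning listed above can be sketched in the same toy setting. The snippet below is a hedged illustration rather than the paper's released recipe: it reuses the TinyVLA class from the earlier sketch, freezes the shared backbone, and briefly updates only the grounding head on features standing in for a handful of phone photos of a new object. The object name "peach", the label index, the learning rate, and the frozen/trainable split are assumptions; in practice one would start from the co-trained checkpoint rather than a freshly constructed policy.

```python
# Hedged sketch of smartphone-based adaptation; reuses TinyVLA from the sketch above.
import torch
import torch.nn as nn

policy = TinyVLA(n_objects=101)   # one extra grounding slot for the new object, e.g. "peach"
NEW_OBJECT_ID = 100               # hypothetical label index assigned to the new object

for p in policy.parameters():                 # freeze the whole policy ...
    p.requires_grad = False
for p in policy.grounding_head.parameters():  # ... except the object-grounding head
    p.requires_grad = True
opt = torch.optim.Adam(policy.grounding_head.parameters(), lr=1e-5)

# Stand-ins for features of ~5 smartphone photos and the prompt "hand over the peach".
photo_feats, prompt_feats = torch.randn(5, 64), torch.randn(5, 32)
labels = torch.full((5,), NEW_OBJECT_ID)

for step in range(50):            # a short fine-tune on a few images is the whole point
    _, logits = policy(photo_feats, prompt_feats)
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```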