🤖 AI Summary
Weak generalization to unseen environments and heavy reliance on large-scale vision-language model (VLM) fine-tuning data hinder language-conditioned robotic manipulation. To address this, we propose a two-stage few-shot learning framework: first decoupling grasp and place subtasks, then introducing an instance-level semantic fusion module that enables fine-grained alignment between textual instructions and image instance features. Integrated with target localization and region classification mechanisms, the framework achieves lightweight adaptation from only a few demonstrations. Evaluated on both simulation and real-world robotic arm platforms, our method significantly improves cross-environment generalization and zero-shot transfer capability, enabling high-precision language-driven manipulation in unseen scenes without large-scale retraining. This work establishes an efficient, scalable paradigm for empowering embodied intelligence with VLMs.
📝 Abstract
The control of robots for manipulation tasks generally relies on visual input. Recent advances in vision-language models (VLMs) enable the use of natural language instructions to condition visual input and control robots in a wider range of environments. However, existing methods require a large amount of data to fine-tune VLMs for operating in unseen environments. In this paper, we present a framework that learns object-arrangement tasks from just a few demonstrations. We propose a two-stage framework that divides object-arrangement tasks into a target localization stage, for picking the object, and a region determination stage for placing the object. We present an instance-level semantic fusion module that aligns the instance-level image crops with the text embedding, enabling the model to identify the target objects defined by the natural language instructions. We validate our method on both simulation and real-world robotic environments. Our method, fine-tuned with a few demonstrations, improves generalization capability and demonstrates zero-shot ability in real-robot manipulation scenarios.