Attribute-Based Robotic Grasping With Data-Efficient Adaptation

📅 2025-01-04
🏛️ IEEE Transactions on Robotics
📈 Citations: 3 · Influential citations: 0
🤖 AI Summary
Rapid robotic grasping of unknown objects in cluttered scenes remains challenging, particularly under data-scarce conditions where labeled grasp annotations are limited. Method: This paper proposes an attribute-driven, end-to-end vision-language grasping framework. It introduces an object-attribute-guided cross-modal embedding learning mechanism that combines gated-attention fusion, self-supervised learning based on object persistence, and adversarial domain adaptation. It also designs two data-efficient transfer strategies: one-grasp adaptation and adversarial adaptation. Results: The method achieves over 81% instance-level grasping success on unseen objects on both simulated and real-world robotic platforms, requiring only a single demonstration grasp or unlabeled images to adapt to new environments. It outperforms state-of-the-art few-shot and transfer learning baselines by large margins, demonstrating robust generalization with minimal supervision.
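
As a rough illustration of the one-grasp transfer strategy above, the sketch below fine-tunes a pretrained grasping model on augmented copies of a single labeled grasp trial. This is a minimal PyTorch sketch under stated assumptions: `model`, its `(image, text_emb)` call signature, the photometric noise augmentation, and the BCE loss are illustrative placeholders, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

def one_grasp_adapt(model: nn.Module, image: torch.Tensor, text_emb: torch.Tensor,
                    grasp_label: torch.Tensor, steps: int = 50, lr: float = 1e-4) -> None:
    """Fine-tune the whole end-to-end model on augmented copies of one labeled grasp.

    Assumes model(image, text_emb) returns affordance logits shaped like grasp_label.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(steps):
        # Photometric noise keeps the single grasp label geometrically valid;
        # the paper's actual augmentation pipeline may differ.
        augmented = image + 0.02 * torch.randn_like(image)
        pred = model(augmented, text_emb)   # predicted grasp-affordance logits
        loss = loss_fn(pred, grasp_label)   # supervise with the one grasp trial
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The design point is data efficiency: rather than collecting a new labeled dataset, many stochastic augmentations of one demonstration supply enough gradient signal to shift the pretrained model toward the new domain.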

📝 Abstract
Robotic grasping is one of the most fundamental robotic manipulation tasks and has been the subject of extensive research. However, swiftly teaching a robot to grasp a novel target object in clutter remains challenging. This article attempts to address the challenge by leveraging object attributes that facilitate recognition, grasping, and rapid adaptation to new domains. In this work, we present an end-to-end encoder–decoder network to learn attribute-based robotic grasping with data-efficient adaptation capability. We first pretrain the end-to-end model with a variety of basic objects to learn generic attribute representation for recognition and grasping. Our approach fuses the embeddings of a workspace image and a query text using a gated-attention mechanism and learns to predict instance grasping affordances. To train the joint embedding space of visual and textual attributes, the robot utilizes object persistence before and after grasping. Our model is self-supervised in a simulation that only uses basic objects of various colors and shapes but generalizes to novel objects in new environments. To further facilitate generalization, we propose two adaptation methods, adversarial adaptation and one-grasp adaptation. Adversarial adaptation regulates the image encoder using augmented data of unlabeled images, whereas one-grasp adaptation updates the overall end-to-end model using augmented data from one grasp trial. Both adaptation methods are data-efficient and considerably improve instance grasping performance. Experimental results in both simulation and the real world demonstrate that our approach achieves over 81% instance grasping success rate on unknown objects, which outperforms several baselines by large margins.
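
To make the gated-attention fusion concrete, here is a minimal PyTorch sketch. It assumes the common gated-attention formulation in which the query-text embedding produces per-channel sigmoid gates that modulate the image feature maps; the names (`GatedAttentionFusion`, `img_channels`, `text_dim`) and layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GatedAttentionFusion(nn.Module):
    """Fuses image feature maps with a text embedding via channel-wise gating."""

    def __init__(self, img_channels: int = 64, text_dim: int = 128):
        super().__init__()
        # Map the query-text embedding to one sigmoid gate per image channel.
        self.gate = nn.Sequential(nn.Linear(text_dim, img_channels), nn.Sigmoid())

    def forward(self, img_feat: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, C, H, W); text_emb: (B, text_dim)
        gates = self.gate(text_emb)                 # (B, C)
        gates = gates.unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1) for broadcasting
        return img_feat * gates                     # attribute-conditioned features

# Usage: the gated feature maps would feed a decoder that predicts
# per-pixel instance-grasping affordances.
fusion = GatedAttentionFusion()
img_feat = torch.randn(2, 64, 28, 28)    # toy workspace-image features
text_emb = torch.randn(2, 128)           # toy query-text embedding
print(fusion(img_feat, text_emb).shape)  # torch.Size([2, 64, 28, 28])
```

The gating lets the same visual backbone serve different queries: the text embedding suppresses feature channels irrelevant to the described attributes before the decoder predicts where to grasp.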
Problem

Research questions and friction points this paper is trying to address.

Robot Learning
Cluttered Environment
Limited Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-End Learning
Visual-Textual Association
Adaptive Skill Acquisition