Learning 6-DoF Fine-grained Grasp Detection Based on Part Affordance Grounding

📅 2023-01-27
🏛️ arXiv.org
📈 Citations: 12
Influential: 0
🤖 AI Summary
Existing robotic grasping research primarily targets object-level stable poses, lacking fine-grained, part-aware grasping capabilities that account for part geometry and functional affordances; progress is further hindered by the absence of large-scale, language-annotated 3D part-level datasets. Method: We propose LangPartGPD, a two-stage framework built on LangSHAPE, a new large language-guided 3D part-level grasping dataset, enabling joint geometric-semantic embedding of parts and language-conditioned generation of 6-DoF grasping poses. The approach integrates 3D point cloud-language cross-modal alignment, LLM-driven planning, and closed-loop perception-action control, validated via sim-to-real transfer. Contribution/Results: Experiments demonstrate significant improvements in part localization accuracy under natural-language instructions of varying complexity, in functional affordance reasoning, and in real-world grasping success rates. The work advances interpretable, hierarchical embodied manipulation by bridging linguistic semantics with geometric part-level control.
📝 Abstract
Robotic grasping is a fundamental ability for a robot to interact with the environment. Current methods focus on how to obtain a stable and reliable grasping pose at the object level, while little work has studied part (shape)-wise grasping, which is related to fine-grained grasping and robotic affordance. Parts can be seen as the atomic elements that compose an object; they contain rich semantic knowledge and correlate strongly with affordance. However, the lack of a large part-wise 3D robotic dataset limits the development of part representation learning and downstream applications. In this paper, we propose a new large Language-guided SHape grAsPing datasEt (named LangSHAPE) to promote 3D part-level affordance and grasping ability learning. From the perspective of robotic cognition, we design a two-stage fine-grained robotic grasping framework (named LangPartGPD), including a novel 3D part language grounding model and a part-aware grasp pose detection model, in which explicit language input from humans or large language models (LLMs) can guide a robot to generate a part-level 6-DoF grasping pose with a textual explanation. Our method combines the advantages of human-robot collaboration and LLMs' planning ability, using explicit language as a symbolic intermediate. To evaluate the effectiveness of our proposed method, we perform 3D part grounding and fine-grained grasp detection experiments in both simulation and physical robot settings, following language instructions across different degrees of textual complexity. Results show our method achieves competitive performance in 3D geometry fine-grained grounding, object affordance inference, and 3D part-aware grasping tasks. Our dataset and code are available at our project website https://sites.google.com/view/lang-shape
Problem

Research questions and friction points this paper is trying to address.

Learning 6-DoF fine-grained grasp detection for robotic interaction
Addressing lack of large part-wise 3D robotic datasets
Enabling language-guided part-level affordance and grasping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language-guided SHape grAsPing datasEt (LangSHAPE)
Two-stage fine-grained robotic grasping framework (LangPartGPD)
3D part language grounding and part-aware grasp detection
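The two-stage design above (language grounds a part, then a grasp pose is fitted to that part) can be sketched with toy stand-ins. Everything below is a hypothetical illustration, not the authors' implementation: grounding is reduced to keyword matching against part labels (a real model uses cross-modal point cloud-language embeddings), and grasp detection is reduced to a centroid plus PCA-aligned orientation as a 6-DoF pose.

```python
# Hypothetical sketch of a two-stage language-guided part grasping pipeline.
# Function names, the keyword-matching grounding, and the PCA-based pose
# fit are illustrative assumptions, not the LangPartGPD models.
import numpy as np

def ground_part(part_labels, instruction, vocabulary):
    """Stage 1 (toy): mask points whose part name appears in the
    instruction. A real grounding model would score parts with a
    learned joint embedding of geometry and language."""
    mentioned = [p for p in vocabulary if p in instruction.lower()]
    if not mentioned:
        return np.zeros(len(part_labels), dtype=bool)
    return np.array([lbl == mentioned[0] for lbl in part_labels])

def detect_grasp(points, mask):
    """Stage 2 (toy): a 6-DoF pose as (position, rotation) -- the part
    centroid plus a rotation whose columns are the part's principal
    axes, obtained from an SVD of the centered part points."""
    part = points[mask]
    center = part.mean(axis=0)
    _, _, vt = np.linalg.svd(part - center, full_matrices=False)
    return center, vt.T  # vt.T is orthonormal: principal directions

# Tiny synthetic mug: a "handle" stretched along x, a "head" along z.
rng = np.random.default_rng(0)
handle = np.c_[rng.uniform(0, 1, 50),
               rng.normal(0, 0.01, 50),
               rng.normal(0, 0.01, 50)]
head = np.c_[rng.normal(1, 0.01, 30),
             rng.normal(0, 0.01, 30),
             rng.uniform(0, 0.5, 30)]
points = np.vstack([handle, head])
labels = ["handle"] * 50 + ["head"] * 30

mask = ground_part(labels, "Grasp the mug by its handle", ["handle", "head"])
center, rotation = detect_grasp(points, mask)
```

The sketch preserves the framework's key property: the language instruction selects *which* part is grasped before any pose is computed, so swapping "handle" for "head" in the instruction changes the resulting 6-DoF pose without touching the geometry code.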
Yaoxian Song
School of Engineering, Westlake University
Penglei Sun
School of Computer Science, Fudan University
Yi Ren
Tencent Robotics X Lab, Tencent
Yu Zheng
Tencent Robotics X Lab, Tencent
Yueying Zhang
School of Engineering, Westlake University