Task-Oriented 6-DoF Grasp Pose Detection in Clutters

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the task-oriented 6D grasp detection problem (TO6DGC) in realistic cluttered scenes. To tackle this challenge, we propose OSTG—the first end-to-end, single-stage model tailored for human–robot collaboration. To support TO6DGC research, we introduce 6DTG, the first large-scale task-oriented 6D grasp dataset, comprising 4,391 cluttered scenes and over two million 6D grasp poses annotated with fine-grained task labels (e.g., cutting, handing, twisting). Methodologically, OSTG innovatively integrates task-conditioned point cloud sampling, task-embedding-guided 6D pose generation, and multi-task joint supervision. On 6DTG, OSTG significantly outperforms existing methods across all metrics. Real-world robotic experiments further demonstrate its strong capability to identify task-sensitive grasp points and estimate high-precision 6D poses—enabling robust, semantically aware grasping in collaborative settings.

📝 Abstract
In general, humans grasp an object differently for different tasks, e.g., "grasping the handle of a knife to cut" vs. "grasping the blade to hand it over". In robotic grasp pose detection research, some existing works have considered this task-oriented grasping and made progress, but they are generally constrained to low-DoF grippers or uncluttered settings, which limits their applicability to human assistance in real life. Aiming at more general and practical grasp models, in this paper we investigate the problem of Task-Oriented 6-DoF Grasp Pose Detection in Clutters (TO6DGC), which extends task-oriented grasping to the more general setting of 6-DoF grasp pose detection in cluttered (multi-object) scenes. To this end, we construct a large-scale 6-DoF task-oriented grasping dataset, 6-DoF Task Grasp (6DTG), which features 4,391 cluttered scenes with over 2 million 6-DoF grasp poses. Each grasp is annotated with a specific task, covering 6 tasks and 198 objects in total. Moreover, we propose One-Stage TaskGrasp (OSTG), a strong baseline for the TO6DGC problem. OSTG adopts a task-oriented point selection strategy to detect where to grasp, and a task-oriented grasp generation module to decide how to grasp for a given task. To evaluate the effectiveness of OSTG, we conduct extensive experiments on 6DTG. The results show that our method outperforms various baselines on multiple metrics. Real-robot experiments further verify that OSTG better perceives task-oriented grasp points and 6-DoF grasp poses.
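To make the "task-oriented point selection" idea concrete, the following is a minimal, hypothetical sketch of how scene points might be scored against a task embedding and the top candidates kept as grasp locations. It is not the paper's OSTG implementation (whose selection module is learned end-to-end); the cosine-similarity scoring, feature dimensions, and function name here are illustrative assumptions only.

```python
import numpy as np

def task_conditioned_point_selection(points, point_feats, task_emb, k=4):
    """Score each scene point by cosine similarity between its feature
    and a task embedding, then keep the top-k as candidate grasp points.
    Hypothetical sketch; OSTG's actual selection strategy is learned."""
    # Normalize per-point features and the task embedding.
    f = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = task_emb / np.linalg.norm(task_emb)
    scores = f @ t                      # cosine similarity per point
    idx = np.argsort(-scores)[:k]       # indices of the k best points
    return points[idx], scores[idx]

# Toy scene: 100 random 3-D points with random 16-d features.
rng = np.random.default_rng(0)
points = rng.uniform(-0.5, 0.5, size=(100, 3))
feats = rng.normal(size=(100, 16))
task_emb = rng.normal(size=16)          # stands in for an embedded task label

sel_pts, sel_scores = task_conditioned_point_selection(points, feats, task_emb, k=4)
print(sel_pts.shape)                    # (4, 3)
```

A second, separate module would then regress a full 6-DoF pose (rotation plus translation) at each selected point, conditioned on the same task embedding, which is the "how to grasp" half of the pipeline the abstract describes.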
Problem

Research questions and friction points this paper is trying to address.

Task-Oriented 6-DoF Grasp Pose Detection
Cluttered multi-object scenarios
General and practical grasp models
Innovation

Methods, ideas, or system contributions that make the work stand out.

6-DoF Grasp Pose Detection
Task-Oriented Point Selection
One-Stage TaskGrasp Model