🤖 AI Summary
This work proposes a geometry-driven approach to task-oriented grasping that addresses the limited generalization of existing methods in functional part identification and grasp reasoning. By leveraging a large language model to construct an object-part-task semantic ontology, the method guides the selection of task-relevant functional parts. Functional parts are identified through geometric analysis of point clouds, and a sampling-based distance metric enables cross-object similarity matching for imitation-based grasp planning. Crucially, the approach eliminates reliance on visual semantic features, achieving high-precision functional part recognition and grasp generation in real-world experiments. It demonstrates significantly improved generalization to unseen object categories, thereby validating its robustness and adaptability in diverse task-oriented manipulation scenarios.
📝 Abstract
Task-oriented grasping (TOG) is more challenging than simple object grasping because it requires precise identification of object parts and careful selection of grasping areas to ensure effective and robust manipulation. While recent approaches have trained large-scale vision-language models to integrate part-level object segmentation with task-aware grasp planning, their instability in part recognition and grasp inference limits their ability to generalize across diverse objects and tasks. To address this issue, we introduce a novel, geometry-centric strategy for more generalizable TOG that does not rely on semantic features from visual recognition, effectively overcoming the viewpoint sensitivity of model-based approaches. Our main proposals include: 1) an object-part-task ontology for functional part selection based on intuitive human commands, constructed using a Large Language Model (LLM); 2) a sampling-based geometric analysis method for identifying the selected object part from observed point clouds, incorporating multiple point-distribution and distance metrics; and 3) a similarity matching framework for imitative grasp planning, which uses similar known objects with pre-existing segmentation and grasping knowledge as references to guide planning for unknown targets. We validate the high accuracy of our approach in functional part selection, identification, and grasp generation through real-world experiments. Additionally, we demonstrate the method's generalization to novel-category objects by extending the existing ontological knowledge, showcasing its adaptability to a broad range of objects and tasks.
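The abstract does not specify which sampling-based distance metric drives the cross-object similarity matching. A common choice for comparing sampled point sets is the symmetric Chamfer distance; the sketch below is a minimal pure-Python illustration of that idea (the function name, sampling sizes, and toy data are illustrative assumptions, not the paper's implementation):

```python
import random

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point sets (lists of 3-tuples).

    For each point in one set, find the squared distance to its nearest
    neighbor in the other set; average both directions and sum them.
    """
    def one_way(src, dst):
        total = 0.0
        for p in src:
            total += min(sum((pi - qi) ** 2 for pi, qi in zip(p, q)) for q in dst)
        return total / len(src)
    return one_way(a, b) + one_way(b, a)

# Toy sampled "parts": a candidate part, a near-identical copy, and a
# geometrically dissimilar part offset far away (all data is synthetic).
rng = random.Random(0)
part_a = [(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(128)]
part_b = [(x + rng.gauss(0, 0.01), y + rng.gauss(0, 0.01), z + rng.gauss(0, 0.01))
          for x, y, z in part_a]
part_c = [(x + 5.0, y + 5.0, z + 5.0) for x, y, z in part_a]

# A similarity-matching step would pick the reference part with the
# smallest distance to the observed target part.
assert chamfer_distance(part_a, part_b) < chamfer_distance(part_a, part_c)
```

In a similarity-matching pipeline of this kind, the known object whose functional part minimizes such a distance to the observed target would serve as the reference for imitative grasp planning.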