AffordGrasp: In-Context Affordance Reasoning for Open-Vocabulary Task-Oriented Grasping in Clutter

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses open-vocabulary, task-oriented robotic grasping in cluttered scenes. We propose the first vision-language model (VLM)-based framework for affordance reasoning driven by implicit instructions, requiring no explicit task annotations. Our method combines in-context prompting, CLIP/LLaVA for semantic understanding, a learnable visual grounding module, and a geometry-aware grasp pose generation network to infer task goals from natural language instructions, localize the relevant objects, and output functionally consistent part-level grasp poses. Key contributions include zero-shot generalization to unseen objects and open-vocabulary tasks, eliminating the reliance on supervised data for fixed task-object pairs. Evaluated in both simulation and real-world settings, our approach achieves state-of-the-art performance, improving task success rate by 27.3% over baselines and generalizing to over 100 novel objects and 50+ open-ended instructions.
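The three-stage pipeline the summary describes (implicit instruction → VLM affordance reasoning → part-level grounding → grasp generation) can be sketched as below. This is a minimal illustration of the control flow only: every function name is a hypothetical stand-in, and the VLM and grasp-network stages are mocked with fixed outputs, so it is not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Grasp:
    position: tuple  # (x, y, z) in the camera frame
    score: float     # grasp quality / confidence

def query_vlm(instruction: str) -> dict:
    # Stage 1 (mocked): in-context affordance reasoning. A real system
    # would prompt a VLM such as LLaVA with the scene image plus the
    # implicit instruction and parse out task, target object, and part.
    return {"task": "pour water", "object": "kettle", "part": "handle"}

def ground_affordance(obj: str, part: str) -> tuple:
    # Stage 2 (mocked): the visual grounding module localizes the
    # functional part; here just a placeholder 2D box (x0, y0, x1, y1).
    return (120, 80, 200, 160)

def generate_grasps(region: tuple) -> list:
    # Stage 3 (mocked): a geometry-aware grasp network proposes poses;
    # only candidates falling inside the affordance region would be kept.
    candidates = [Grasp((0.1, 0.2, 0.5), 0.91), Grasp((0.4, 0.1, 0.6), 0.55)]
    return sorted(candidates, key=lambda g: g.score, reverse=True)

def afford_grasp(instruction: str) -> Grasp:
    reasoning = query_vlm(instruction)
    region = ground_affordance(reasoning["object"], reasoning["part"])
    return generate_grasps(region)[0]  # highest-scoring task-oriented grasp

best = afford_grasp("I want to pour some water")
print(best.score)  # → 0.91
```

The key design point the sketch tries to capture is that the task is never stated explicitly: the instruction "I want to pour some water" is resolved by the reasoning stage into an object ("kettle") and a functional part ("handle") before any grasp is generated.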

📝 Abstract
Inferring the affordance of an object and grasping it in a task-oriented manner is crucial for robots to successfully complete manipulation tasks. Affordance indicates where and how to grasp an object by taking its functionality into account, serving as the foundation for effective task-oriented grasping. However, current task-oriented methods often depend on extensive training data that is confined to specific tasks and objects, making it difficult to generalize to novel objects and complex scenes. In this paper, we introduce AffordGrasp, a novel open-vocabulary grasping framework that leverages the reasoning capabilities of vision-language models (VLMs) for in-context affordance reasoning. Unlike existing methods that rely on explicit task and object specifications, our approach infers tasks directly from implicit user instructions, enabling more intuitive and seamless human-robot interaction in everyday scenarios. Building on the reasoning outcomes, our framework identifies task-relevant objects and grounds their part-level affordances using a visual grounding module. This allows us to generate task-oriented grasp poses precisely within the affordance regions of the object, ensuring both functional and context-aware robotic manipulation. Extensive experiments demonstrate that AffordGrasp achieves state-of-the-art performance in both simulation and real-world scenarios, highlighting the effectiveness of our method. We believe our approach advances robotic manipulation techniques and contributes to the broader field of embodied AI. Project website: https://eqcy.github.io/affordgrasp/.
Problem

Research questions and friction points this paper is trying to address.

Generalizing task-oriented grasping to novel objects and complex scenes.
Inferring tasks from implicit user instructions for intuitive human-robot interaction.
Generating precise task-oriented grasp poses within affordance regions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages vision-language models for in-context affordance reasoning.
Infers tasks from implicit user instructions, with no explicit task annotations.
Generates task-oriented grasp poses within part-level affordance regions.