DHAGrasp: Synthesizing Affordance-Aware Dual-Hand Grasps with Text Instructions

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak semantic awareness and scarce annotated data in bimanual robotic grasping, this paper proposes a text-guided, affordance-aware bimanual grasp synthesis framework. Methodologically, the authors construct a bimanual affordance representation that integrates hand-object symmetry and semantic part priors, and design a two-stage learning paradigm: first generating a large-scale synthetic dataset via the SymOpt pipeline, then fine-tuning the grasp generation model with text instructions and a bimanual affordance detection network for fine-grained semantic alignment. The contributions are threefold: (1) the first semantically consistent bimanual grasp generation framework for unseen objects; (2) the first large-scale bimanual affordance-aware synthetic dataset; and (3) comprehensive experiments demonstrating significant improvements over state-of-the-art baselines in grasp diversity, semantic consistency, and cross-object generalization.

📝 Abstract
Learning to generate dual-hand grasps that respect object semantics is essential for robust hand-object interaction but remains largely underexplored due to dataset scarcity. Existing grasp datasets predominantly focus on single-hand interactions and contain only limited semantic part annotations. To address these challenges, we introduce a pipeline, SymOpt, that constructs a large-scale dual-hand grasp dataset by leveraging existing single-hand datasets and exploiting object and hand symmetries. Building on this, we propose a text-guided dual-hand grasp generator, DHAGrasp, that synthesizes Dual-Hand Affordance-aware Grasps for unseen objects. Our approach incorporates a novel dual-hand affordance representation and follows a two-stage design, which enables effective learning from a small set of segmented training objects while scaling to a much larger pool of unsegmented data. Extensive experiments demonstrate that our method produces diverse and semantically consistent grasps, outperforming strong baselines in both grasp quality and generalization to unseen objects. The project page is at https://quanzhou-li.github.io/DHAGrasp/.
Problem

Research questions and friction points this paper addresses.

Generating dual-hand grasps respecting object semantics
Addressing dataset scarcity for dual-hand interactions
Creating affordance-aware grasps guided by text instructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructs large-scale dataset using object symmetries
Proposes text-guided dual-hand grasp generation model
Introduces novel dual-hand affordance representation design
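The paper does not spell out the SymOpt pipeline here, but the core symmetry idea it relies on (deriving a second-hand grasp from an existing single-hand one via an object's symmetry plane) can be sketched as a simple reflection of hand keypoints. The function name `mirror_grasp` and the plane parameterization below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def mirror_grasp(hand_points: np.ndarray,
                 plane_point: np.ndarray,
                 plane_normal: np.ndarray) -> np.ndarray:
    """Reflect 3D hand keypoints across an object's symmetry plane.

    hand_points: (N, 3) keypoints of, e.g., a right-hand grasp.
    Returns the mirrored (N, 3) keypoints, which land on the symmetric
    side of the object with opposite handedness (a left-hand grasp).
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (hand_points - plane_point) @ n         # signed distance to plane
    return hand_points - 2.0 * d[:, None] * n   # Householder reflection

# Example: mirror a toy two-keypoint "grasp" across the y-z plane (x = 0)
right = np.array([[0.10, 0.02, 0.05],
                  [0.12, 0.00, 0.04]])
left = mirror_grasp(right, np.zeros(3), np.array([1.0, 0.0, 0.0]))
# x-coordinates flip sign; y and z are unchanged
```

In a full pipeline, the mirrored pose would still need handedness-aware remapping of joint parameters and an optimization step to resolve collisions, which is presumably where SymOpt's optimization comes in.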