G-DexGrasp: Generalizable Dexterous Grasping Synthesis Via Part-Aware Prior Retrieval and Prior-Assisted Generation

📅 2025-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses zero-shot dexterous grasping—generating physically plausible, high-quality grasp configurations for unseen object categories under diverse natural-language instructions. We propose a retrieval-augmented generation framework featuring a novel dual-prior retrieval mechanism that jointly incorporates “fine-grained contact parts” and “function-oriented grasp distributions,” embedding semantic priors throughout retrieval, generation, and optimization. Our method integrates contrastive learning–driven part-level retrieval, conditional diffusion modeling, and prior-distribution–guided optimization regularization. Evaluated on multiple zero-shot object benchmarks, it significantly outperforms state-of-the-art methods: improving grasp plausibility by 23.6%, task-alignment accuracy by 31.4%, and achieving the first end-to-end generalization from language instructions to function-aware grasps.

Technology Category

Application Category

📝 Abstract
Recent advances in dexterous grasping synthesis have demonstrated significant progress in producing reasonable and plausible grasps for many task purposes. But it remains challenging to generalize to unseen object categories and diverse task instructions. In this paper, we propose G-DexGrasp, a retrieval-augmented generation approach that can produce high-quality dexterous hand configurations for unseen object categories and language-based task instructions. The key is to retrieve generalizable grasping priors, including the fine-grained contact part and the affordance-related distribution of relevant grasping instances, for the following synthesis pipeline. Specifically, the fine-grained contact part and affordance act as generalizable guidance to infer reasonable grasping configurations for unseen objects with a generative model, while the relevant grasping distribution plays as regularization to guarantee the plausibility of synthesized grasps during the subsequent refinement optimization. Our comparison experiments validate the effectiveness of our key designs for generalization and demonstrate the remarkable performance against the existing approaches. Project page: https://g-dexgrasp.github.io/
Problem

Research questions and friction points this paper is trying to address.

Generalize grasping to unseen object categories
Handle diverse language-based task instructions
Ensure plausibility of synthesized grasps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-augmented generation for dexterous grasping
Part-aware prior retrieval for unseen objects
Prior-assisted generative model for grasp synthesis
🔎 Similar Papers
No similar papers found.
J
Juntao Jian
Dalian University of Technology
X
Xiuping Liu
Dalian University of Technology
Z
Zixuan Chen
Dalian University of Technology
Manyi Li
Manyi Li
Shandong University
3D modelingsurface reconstructionindoor scene generation
J
Jian Liu
Shenyang University of Technology
R
Ruizhen Hu
Shenzhen University