BiDexGrasp: Coordinated Bimanual Dexterous Grasps across Object Geometries and Sizes

📅 2026-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of generalizing bimanual robotic dexterous grasping to objects with diverse geometries and sizes, a problem hindered by the lack of large-scale datasets and efficient grasp-generation models. The authors introduce a large-scale bimanual grasping dataset comprising 6,351 objects and 9.7 million annotated grasps, along with a novel grasp generation framework that integrates coordination awareness and geometric size adaptability. The framework employs a two-stage synthesis strategy featuring region-based grasp initialization, decoupled force-closure optimization, bimanual coordination modeling, and a size-adaptive generation network to efficiently produce physically feasible grasps. Experimental results demonstrate that the proposed method significantly outperforms existing approaches in both simulation and real-world settings, enabling high-quality grasping of previously unseen objects.
📝 Abstract
Bimanual dexterous grasping is a fundamental and promising area in robotics, yet its progress is constrained by the lack of comprehensive datasets and powerful generation models. In this work, we propose BiDexGrasp, which consists of a large-scale bimanual dexterous grasp dataset and a novel generation model. For the dataset, we propose a novel bimanual grasp synthesis pipeline to efficiently annotate physically feasible data for dataset construction. This pipeline addresses the challenges of high-dimensional bimanual grasping through a two-stage synthesis strategy of efficient region-based grasp initialization and decoupled force-closure grasp optimization. Powered by this pipeline, we construct a large-scale bimanual dexterous grasp dataset comprising 6,351 diverse objects with sizes ranging from 30 to 80 cm, along with 9.7 million annotated grasps. Based on this dataset, we further introduce a bimanual-coordinated and geometry-size-adaptive dexterous grasp generation framework. The framework rests on two key designs: a bimanual coordination module and a geometry-size-adaptive grasp generation strategy, which together generate coordinated, high-quality grasps on unseen objects. Extensive experiments conducted in both simulation and the real world demonstrate the superior performance of our proposed data synthesis pipeline and learned generative framework.
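The abstract's synthesis pipeline hinges on force-closure optimization. As background only (this is not the paper's decoupled optimization, and it ignores torque balance), a toy 2D force-closure test can be sketched: a set of contact force directions positively spans the plane exactly when, after sorting their friction-cone edge angles on the circle, no angular gap reaches 180°. All function names below are illustrative, not from the paper.

```python
import math

def friction_cone_edges(normal_angle, mu):
    """Return the two edge directions (angles) of a 2D friction cone
    with friction coefficient mu around a contact normal."""
    half = math.atan(mu)  # cone half-angle measured from the normal
    return [normal_angle - half, normal_angle + half]

def positively_spans_plane(angles):
    """Directions positively span R^2 iff, sorted on the circle,
    every angular gap between consecutive directions is below pi."""
    a = sorted(x % (2 * math.pi) for x in angles)
    gaps = [b - c for b, c in zip(a[1:], a[:-1])]
    gaps.append(2 * math.pi - (a[-1] - a[0]))  # wrap-around gap
    return max(gaps) < math.pi

def force_closure_2d(contact_normal_angles, mu=0.5):
    """Toy force-direction coverage check (torques ignored): collect
    friction-cone edges from all contacts and test planar spanning."""
    edges = []
    for n in contact_normal_angles:
        edges.extend(friction_cone_edges(n, mu))
    return positively_spans_plane(edges)

# Two antipodal contacts (inward normals pi apart) achieve closure
# once friction widens each cone; two nearly parallel contacts do not.
print(force_closure_2d([0.0, math.pi], mu=0.5))  # True
print(force_closure_2d([0.0, 0.1], mu=0.5))      # False
```

With mu = 0 the antipodal case correctly fails, since two opposed pure-normal forces only span a line; the paper's actual optimization works in full 3D wrench space over two hands, which this sketch does not attempt.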
Problem

Research questions and friction points this paper is trying to address.

bimanual grasping
dexterous manipulation
grasp dataset
object geometry
size variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

bimanual dexterous grasping
grasp synthesis pipeline
force-closure optimization
geometry-size-adaptive generation
coordinated bimanual manipulation
👥 Authors
Mu Lin, School of Computer Science and Engineering, Sun Yat-sen University
Yi-Lin Wei, Sun Yat-sen University
Jiaxuan Chen, School of Computer Science and Engineering, Sun Yat-sen University
Yuhao Lin, School of Computer Science and Engineering, Sun Yat-sen University
Shuoyu Chen, School of Computer Science and Engineering, Sun Yat-sen University
Jiangran Lyu, School of Computer Science, Peking University
Jiayi Chen, Peking University (Robotics · 3D Vision)
Yansong Tang, Shenzhen International Graduate School, Tsinghua University
He Wang, Assistant Professor of Computer Science, Peking University (Embodied AI · Computer Vision · Robotics)
Wei-Shi Zheng, Professor, Sun Yat-sen University (Computer Vision · Pattern Recognition · Machine Learning)