D3Grasp: Diverse and Deformable Dexterous Grasping for General Objects

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of robustness in dexterous grasping of general and deformable objects—caused by high-dimensional action spaces and perceptual uncertainty—this paper proposes a multimodal perception-guided asymmetric reinforcement learning framework. The method introduces a unified vision-tactile representation, incorporates privileged information during training to improve sample efficiency while preserving deployment fidelity, and designs a grasp synthesis strategy that jointly ensures high contact quality, zero geometric penetration, and kinematic feasibility. Leveraging simulation-to-reality transfer, high-fidelity contact modeling, and constrained optimization, the approach achieves a 95.1% average grasp success rate across a large set of rigid and deformable objects in real-world settings—significantly outperforming prior methods and establishing new state-of-the-art performance.

📝 Abstract
Achieving diverse and stable dexterous grasping for general and deformable objects remains a fundamental challenge in robotics, due to high-dimensional action spaces and uncertainty in perception. In this paper, we present D3Grasp, a multimodal perception-guided reinforcement learning framework designed to enable Diverse and Deformable Dexterous Grasping. First, we introduce a unified multimodal representation that integrates visual and tactile perception to robustly grasp common objects with diverse properties. Second, we propose an asymmetric reinforcement learning architecture that exploits privileged information during training while preserving deployment realism, enhancing both generalization and sample efficiency. Third, we meticulously design a training strategy to synthesize contact-rich, penetration-free, and kinematically feasible grasps with enhanced adaptability to deformable and contact-sensitive objects. Extensive evaluations confirm that D3Grasp delivers highly robust performance across large-scale and diverse object categories, and substantially advances the state of the art in dexterous grasping for deformable and compliant objects, even under perceptual uncertainty and real-world disturbances. D3Grasp achieves an average success rate of 95.1% in real-world trials, outperforming prior methods on both rigid and deformable object benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Achieving diverse and stable dexterous grasping for general and deformable objects
Addressing high-dimensional action spaces and uncertainty in robotic perception
Developing robust grasping methods for deformable and contact-sensitive objects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal perception integrating vision and touch
Asymmetric reinforcement learning with privileged information
Training strategy for contact-rich penetration-free grasps
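The asymmetric actor-critic idea listed above can be sketched in miniature: the actor consumes only observations available at deployment (the fused visual-tactile representation), while the critic additionally receives privileged simulator state during training. The sketch below is not the authors' code; all dimensions, layer shapes, and the linear networks are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM = 32    # deployable observation: fused visual-tactile features (assumed size)
PRIV_DIM = 16   # privileged state: object pose, contact forces, material params (assumed size)
ACT_DIM = 22    # dexterous-hand action dimension (assumed size)


class Actor:
    """Policy network: sees only what is available on the real robot."""

    def __init__(self):
        self.W = rng.normal(0.0, 0.1, (ACT_DIM, OBS_DIM))

    def act(self, obs):
        # Bounded joint-position targets for the hand.
        return np.tanh(self.W @ obs)


class Critic:
    """Value network: sees the observation plus privileged simulator state.

    Because the critic is discarded at deployment, feeding it privileged
    information improves sample efficiency without hurting realism.
    """

    def __init__(self):
        self.W = rng.normal(0.0, 0.1, (1, OBS_DIM + PRIV_DIM))

    def value(self, obs, priv):
        return float(self.W @ np.concatenate([obs, priv]))


actor, critic = Actor(), Critic()
obs = rng.normal(size=OBS_DIM)    # visual-tactile features
priv = rng.normal(size=PRIV_DIM)  # privileged state, available only in simulation

action = actor.act(obs)           # deployment path: uses obs alone
v = critic.value(obs, priv)       # training path: value estimate with privileged input
```

At deployment only `Actor.act` runs, so the policy never depends on information the real sensors cannot provide.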