AO-Grasp: Articulated Object Grasp Generation

📅 2023-10-24
🏛️ IEEE/RSJ International Conference on Intelligent Robots and Systems
📈 Citations: 5
Influential: 2
🤖 AI Summary
This work addresses the challenge of generating stable, executable 6-DoF grasps on articulated objects (e.g., cabinet doors, appliances) from partial point cloud observations. The authors propose the first end-to-end framework that regresses feasible 6-DoF grasp poses directly from a segmented partial point cloud of a single object, without requiring part segmentation or hand-crafted heuristics. The core method pairs an Actionable Grasp Point Predictor with an orientation-matching mechanism that assigns a grasp orientation to each predicted point. To support learning-based approaches, the authors introduce the first large-scale synthetic dataset of actionable parallel-jaw grasps on articulated objects, containing 78K samples. In simulation, the model achieves a 45.0% grasp success rate, a 10.0-point absolute improvement over the strongest baseline (35.0%). On 120 real-world scenes of articulated objects with varied geometries, articulation axes, and joint states, it attains 67.5% success, substantially outperforming the baseline (33.3%).
📝 Abstract
We introduce AO-Grasp, a grasp proposal method that generates 6 DoF grasps that enable robots to interact with articulated objects, such as opening and closing cabinets and appliances. AO-Grasp consists of two main contributions: the AO-Grasp Model and the AO-Grasp Dataset. Given a segmented partial point cloud of a single articulated object, the AO-Grasp Model predicts the best grasp points on the object with an Actionable Grasp Point Predictor. Then, it finds corresponding grasp orientations for each of these points, resulting in stable and actionable grasp proposals. We train the AO-Grasp Model on our new AO-Grasp Dataset, which contains 78K actionable parallel-jaw grasps on synthetic articulated objects. In simulation, AO-Grasp achieves a 45.0% grasp success rate, whereas the highest performing baseline achieves a 35.0% success rate. Additionally, we evaluate AO-Grasp on 120 real-world scenes of objects with varied geometries, articulation axes, and joint states, where AO-Grasp produces successful grasps on 67.5% of scenes, while the baseline only produces successful grasps on 33.3% of scenes. To the best of our knowledge, AO-Grasp is the first method for generating 6 DoF grasps on articulated objects directly from partial point clouds without requiring part detection or hand-designed grasp heuristics. The AO-Grasp Dataset and a pre-trained AO-Grasp model are available at our project website: https://stanford-iprl-lab.github.io/ao-grasp/.
Problem

Research questions and friction points this paper is trying to address.

How to generate 6 DoF grasps for articulated objects (e.g., for opening and closing them)
How to predict actionable grasp points from partial point clouds without part detection
How to improve grasp success rates over existing baselines
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end 6 DoF grasp generation without part detection or hand-designed heuristics
Uses an Actionable Grasp Point Predictor with orientation matching
Trained on a new 78K-sample synthetic grasp dataset
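The pipeline described above has two stages: score candidate points on the partial point cloud, then attach a 6-DoF orientation to each selected point. A minimal sketch of that structure is below. This is not the authors' code: `propose_grasps`, the given per-point scores, and the centroid-based approach direction are all illustrative stand-ins for the learned Actionable Grasp Point Predictor and the paper's orientation-matching step.

```python
import numpy as np

def propose_grasps(points, scores, k=5):
    """Two-stage grasp proposal sketch (hypothetical, not the paper's code).

    Stage 1: rank points by a per-point actionability score (in AO-Grasp
             this comes from the learned predictor; here it is an input).
    Stage 2: attach a pose to each selected point. As a placeholder for the
             paper's orientation matching, point the gripper's approach
             axis from the grasp point toward the cloud centroid.
    """
    top = np.argsort(scores)[::-1][:k]      # indices of the k best points
    centroid = points.mean(axis=0)
    proposals = []
    for i in top:
        p = points[i]
        approach = centroid - p             # placeholder approach direction
        norm = np.linalg.norm(approach)
        approach = approach / norm if norm > 1e-9 else np.array([0.0, 0.0, 1.0])
        proposals.append((p, approach, float(scores[i])))
    return proposals

# Toy usage: 100 random points standing in for a partial point cloud
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
sc = rng.random(100)
grasps = propose_grasps(pts, sc, k=5)
print(len(grasps))  # 5
```

The point of the two-stage split, as the summary describes it, is that point selection and orientation assignment are decoupled: the predictor only has to learn where a grasp is actionable, while a separate mechanism supplies the full 6-DoF pose.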