Consensus-Driven Uncertainty for Robotic Grasping based on RGB Perception

📅 2025-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In RGB image-driven robotic grasping, deep pose estimators are often overconfident and lack reliable uncertainty quantification, leading to grasp failures under high uncertainty. Method: a consensus-driven uncertainty modeling framework trains jointly on multiple objects to learn a shared cross-object representation; a lightweight deep network is trained end-to-end under joint supervision from real-image pose estimation and simulated grasp-success labels to directly predict grasp success probability. Uncertainty quantification is embedded at the front end of the grasping decision pipeline rather than applied as a post-hoc correction. Results: experiments show robust performance under high appearance and shape variability, with significant gains in grasp success rate and decision reliability, validating that consensus-based uncertainty modeling improves generalization to downstream robotic manipulation tasks.

📝 Abstract
Deep object pose estimators are notoriously overconfident. A grasping agent that both estimates the 6-DoF pose of a target object and predicts the uncertainty of its own estimate could avoid task failure by choosing not to act under high uncertainty. Even though object pose estimation improves and uncertainty quantification research continues to make strides, few studies have connected them to the downstream task of robotic grasping. We propose a method for training lightweight, deep networks to predict whether a grasp guided by an image-based pose estimate will succeed before that grasp is attempted. We generate training data for our networks via object pose estimation on real images and simulated grasping. We also find that, despite high object variability in grasping trials, networks benefit from training on all objects jointly, suggesting that a diverse variety of objects can nevertheless contribute to the same goal.
Problem

Research questions and friction points this paper is trying to address.

Overconfidence in deep object pose estimators for grasping
Lack of uncertainty-aware grasping methods in robotics
Need for diverse object training to improve grasp success
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predict grasp success using lightweight deep networks
Train with real images and simulated grasping data
Joint training on diverse objects improves performance
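The decision step described above (a lightweight network maps pose-estimate features to a grasp-success probability, and the agent abstains when that probability is low) can be sketched in plain Python. This is an illustrative sketch only: the feature choice, predictor size, random placeholder weights, and the `should_grasp` threshold are assumptions, not the authors' architecture; in the paper the predictor would be trained on real-image pose estimates paired with simulated grasp outcomes.

```python
import math
import random


def sigmoid(x):
    """Squash a real-valued score into a (0, 1) probability."""
    return 1.0 / (1.0 + math.exp(-x))


class GraspSuccessPredictor:
    """Tiny single-layer stand-in for the paper's lightweight network.

    Weights here are random placeholders (assumption for illustration);
    in practice they would be learned from pose-estimate features
    labeled with simulated grasp success/failure.
    """

    def __init__(self, n_features, seed=0):
        rng = random.Random(seed)
        self.weights = [rng.uniform(-0.1, 0.1) for _ in range(n_features)]
        self.bias = 0.0

    def predict(self, features):
        """Return the predicted probability that a grasp guided by
        this pose estimate will succeed."""
        score = sum(w * f for w, f in zip(self.weights, features)) + self.bias
        return sigmoid(score)


def should_grasp(p_success, threshold=0.5):
    """Act only when predicted success is high enough; otherwise
    abstain, avoiding task failure under high uncertainty.
    The threshold value is an illustrative assumption."""
    return p_success >= threshold
```

Because the predictor runs before any motion is executed, the abstain decision sits at the front of the grasping pipeline, matching the paper's framing of uncertainty as an input to the decision rather than a post-hoc correction.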