GRASPLAT: Enabling dexterous grasping through novel view synthesis

📅 2025-10-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limitation of dexterous multi-fingered robotic grasping caused by the scarcity of high-quality 3D scans in real-world scenarios, this paper proposes the first end-to-end RGB-only grasping framework. Methodologically, we introduce 3D Gaussian Splatting to grasping for the first time, synthesizing novel-view hand-object interaction images and jointly optimizing hand joint pose regression via photometric consistency loss—eliminating reliance on complete 3D geometry. Our technical contributions are: (1) high-fidelity, RGB-driven novel-view synthesis; (2) a photometric-loss-guided pose learning mechanism; and (3) an end-to-end training paradigm requiring no 3D annotations. Experiments on both synthetic and real-world datasets demonstrate that our method achieves up to a 36.9% improvement in grasping success rate over existing RGB-based approaches.

Technology Category

Application Category

📝 Abstract
Achieving dexterous robotic grasping with multi-fingered hands remains a significant challenge. While existing methods rely on complete 3D scans to predict grasp poses, these approaches face limitations due to the difficulty of acquiring high-quality 3D data in real-world scenarios. In this paper, we introduce GRASPLAT, a novel grasping framework that leverages consistent 3D information while being trained solely on RGB images. Our key insight is that by synthesizing physically plausible images of a hand grasping an object, we can regress the corresponding hand joints for a successful grasp. To achieve this, we utilize 3D Gaussian Splatting to generate high-fidelity novel views of real hand-object interactions, enabling end-to-end training with RGB data. Unlike prior methods, our approach incorporates a photometric loss that refines grasp predictions by minimizing discrepancies between rendered and real images. We conduct extensive experiments on both synthetic and real-world grasping datasets, demonstrating that GRASPLAT improves grasp success rates up to 36.9% over existing image-based methods. Project page: https://mbortolon97.github.io/grasplat/
Problem

Research questions and friction points this paper is trying to address.

Achieving dexterous robotic grasping with multi-fingered hands
Overcoming limitations of 3D scan dependency in real-world scenarios
Regressing hand joints through synthesized plausible grasp images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses 3D Gaussian Splatting for novel view synthesis
Trains end-to-end with only RGB image data
Incorporates photometric loss to refine grasp predictions
🔎 Similar Papers
No similar papers found.