DreamGrasp: Zero-Shot 3D Multi-Object Reconstruction from Partial-View Images for Robotic Manipulation

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of multi-object, partial-view 3D reconstruction and instance-level recognition from sparse RGB images under severe occlusion and clutter. Method: the authors propose a zero-shot, depth-free, symmetry-agnostic, and annotation-free generative framework that leverages geometric priors from large-scale pretrained generative models. It uses contrastive learning for instance-aware object localization, followed by text-guided, instance-level geometric refinement, forming a three-stage optimization: coarse 3D reconstruction → instance segmentation → missing geometry completion. Contribution/Results: to the authors' knowledge, this is the first approach to jointly integrate generative priors, contrastive learning, and textual prompting, eliminating reliance on complete views, depth sensors, or supervised signals. Experiments demonstrate substantial improvements in 3D reconstruction accuracy under complex multi-object occlusion, and significant gains in downstream robotic tasks, including clutter removal and targeted grasping.
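The summary does not include pseudocode, but the three-stage optimization it describes can be sketched as a pipeline skeleton. Everything below is illustrative: the function names, data structures, and placeholder outputs are assumptions for exposition, not the authors' actual API, and the real system relies on large pretrained generative models rather than these stubs.

```python
# Hypothetical sketch of the three-stage DreamGrasp pipeline described above.
# All names and return values are placeholders, not the paper's implementation.

def coarse_reconstruction(rgb_views):
    """Stage 1: lift sparse partial-view RGB images into a coarse 3D scene,
    using generative priors to imagine unobserved regions (placeholder)."""
    return {"points": [(0.0, 0.0, 0.0)], "num_views": len(rgb_views)}

def instance_segmentation(scene):
    """Stage 2: split the coarse scene into object instances using
    contrastive, instance-aware features (placeholder: single instance)."""
    return [{"points": scene["points"], "label": "object_0"}]

def text_guided_refinement(instances, prompts):
    """Stage 3: complete each instance's missing geometry, conditioning the
    generative model on a per-object text prompt (placeholder)."""
    return [dict(inst, prompt=prompts.get(inst["label"], "")) for inst in instances]

def dreamgrasp_pipeline(rgb_views, prompts):
    scene = coarse_reconstruction(rgb_views)            # coarse 3D reconstruction
    instances = instance_segmentation(scene)            # instance segmentation
    return text_guided_refinement(instances, prompts)   # missing geometry completion
```

The point of the skeleton is the data flow: the scene is reconstructed once, segmented into instances, and only then refined per instance, which is what lets text prompts act at the object level rather than on the whole scene.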

📝 Abstract
Partial-view 3D recognition -- reconstructing 3D geometry and identifying object instances from a few sparse RGB images -- is an exceptionally challenging yet practically essential task, particularly in cluttered, occluded real-world settings where full-view or reliable depth data are often unavailable. Existing methods, whether based on strong symmetry priors or supervised learning on curated datasets, fail to generalize to such scenarios. In this work, we introduce DreamGrasp, a framework that leverages the imagination capability of large-scale pre-trained image generative models to infer the unobserved parts of a scene. By combining coarse 3D reconstruction, instance segmentation via contrastive learning, and text-guided instance-wise refinement, DreamGrasp circumvents limitations of prior methods and enables robust 3D reconstruction in complex, multi-object environments. Our experiments show that DreamGrasp not only recovers accurate object geometry but also supports downstream tasks like sequential decluttering and target retrieval with high success rates.
Problem

Research questions and friction points this paper is trying to address.

Reconstruct 3D geometry from partial-view RGB images
Identify object instances in cluttered, occluded environments
Generalize to real-world settings without full-view data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages pre-trained generative models for scene inference
Combines coarse 3D reconstruction and contrastive segmentation
Uses text-guided refinement for multi-object environments