3D Feature Distillation with Object-Centric Priors

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenging problem of open-vocabulary language-to-3D grounding from single-view RGB-D input. To overcome the limitations of existing 2D-to-3D CLIP feature transfer paradigms, namely their reliance on multi-view data and scene-specific fine-tuning, the authors propose a generalizable 3D vision-language feature distillation framework. The method introduces a novel semantic-driven, object-level multi-view fusion mechanism that leverages instance segmentation masks and object-centric priors to prune uninformative views. The authors further construct the first large-scale synthetic multi-view dataset of cluttered tabletop scenes (15K scenes, 3,300+ unique object instances), and integrate instance-level feature fusion, NeRF-based lightweight adaptation, and synthetic-data-driven training. Experiments demonstrate substantial improvements in 3D CLIP grounding accuracy and segmentation crispness, zero-shot generalization to unseen tabletop scenes, effective 3D instance segmentation without fine-tuning, and successful deployment in language-guided robotic grasping in clutter.

📝 Abstract
Grounding natural language to the physical world is a ubiquitous topic with a wide range of applications in computer vision and robotics. Recently, 2D vision-language models such as CLIP have been widely popularized, due to their impressive capabilities for open-vocabulary grounding in 2D images. Recent works aim to elevate 2D CLIP features to 3D via feature distillation, but either learn neural fields that are scene-specific and hence lack generalization, or focus on indoor room scan data that require access to multiple camera views, which is not practical in robot manipulation scenarios. Additionally, related methods typically fuse features at pixel-level and assume that all camera views are equally informative. In this work, we show that this approach leads to sub-optimal 3D features, both in terms of grounding accuracy, as well as segmentation crispness. To alleviate this, we propose a multi-view feature fusion strategy that employs object-centric priors to eliminate uninformative views based on semantic information, and fuse features at object-level via instance segmentation masks. To distill our object-centric 3D features, we generate a large-scale synthetic multi-view dataset of cluttered tabletop scenes, spawning 15k scenes from over 3300 unique object instances, which we make publicly available. We show that our method reconstructs 3D CLIP features with improved grounding capacity and spatial consistency, while doing so from single-view RGB-D, thus departing from the assumption of multiple camera views at test time. Finally, we show that our approach can generalize to novel tabletop domains and be re-purposed for 3D instance segmentation without fine-tuning, and demonstrate its utility for language-guided robotic grasping in clutter.
Problem

Research questions and friction points this paper is trying to address.

Elevating 2D CLIP features to generalizable 3D representations
Overcoming multi-view dependency in 3D feature distillation
Improving 3D grounding accuracy and segmentation crispness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object-centric priors eliminate uninformative views
Fuse features at object-level via segmentation masks
Single-view RGB-D input for 3D feature distillation
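The fusion strategy described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function, its signature, and the pruning heuristic (ranking views by cosine similarity between each view's masked object feature and a semantic label embedding, then keeping only the top fraction) are assumptions made for illustration, using NumPy in place of real CLIP feature maps and instance masks.

```python
# Illustrative sketch (NOT the paper's code): object-level multi-view
# feature fusion with semantic pruning of uninformative views.
import numpy as np

def fuse_object_features(feat_maps, masks, label_emb, keep_ratio=0.5):
    """feat_maps: (V, H, W, D) per-view dense 2D features (e.g. CLIP-like)
       masks:     (V, H, W) boolean instance masks for ONE object
       label_emb: (D,) unit-norm embedding of the object's semantic label
       Returns a single unit-norm (D,) fused object feature, or None."""
    obj_feats, scores = [], []
    for v in range(feat_maps.shape[0]):
        m = masks[v]
        if not m.any():                        # object not visible here
            continue
        f = feat_maps[v][m].mean(axis=0)       # object-level pooling via mask
        f = f / (np.linalg.norm(f) + 1e-8)
        obj_feats.append(f)
        scores.append(float(f @ label_emb))    # semantic informativeness score

    if not obj_feats:
        return None

    # Keep only the most semantically informative views (prune the rest).
    order = np.argsort(scores)[::-1]
    k = max(1, int(np.ceil(keep_ratio * len(order))))
    kept = np.stack([obj_feats[i] for i in order[:k]])

    fused = kept.mean(axis=0)                  # fuse at object level
    return fused / (np.linalg.norm(fused) + 1e-8)
```

Pooling inside the instance mask before fusing is what makes the result object-level rather than pixel-level, and the score-based cutoff is one simple way to drop views that contribute little semantic signal.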