3D-MVP: 3D Multiview Pretraining for Robotic Manipulation

📅 2024-06-26
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited 3D scene understanding capability in robotic manipulation and the restriction of existing vision pretraining to 2D images, this paper introduces the first 3D multi-view masked autoencoding pretraining paradigm tailored for robotic grasping. Methodologically, we extend the Masked Autoencoder (MAE) framework to multi-view 3D inputs for the first time; decouple the Robotic View Transformer (RVT) architecture to enable dedicated pretraining of the visual encoder; and leverage large-scale 3D datasets (e.g., Objaverse) with multi-view rendering and geometric alignment. Experiments demonstrate that our framework significantly outperforms 2D pretrained baselines across multiple simulated robotic manipulation tasks: achieving an average 12.7% improvement in action prediction accuracy and a 23.4% gain in cross-object generalization—thereby validating the critical role of 3D multi-view pretraining in enhancing the generalizability of vision-based robotic policies.

Technology Category

Application Category

📝 Abstract
Recent works have shown that visual pretraining on egocentric datasets using masked autoencoders (MAE) can improve generalization for downstream robotics tasks. However, these approaches pretrain only on 2D images, while many robotics applications require 3D scene understanding. In this work, we propose 3D-MVP, a novel approach for 3D Multi-View Pretraining using masked autoencoders. We leverage Robotic View Transformer (RVT), which uses a multi-view transformer to understand the 3D scene and predict gripper pose actions. We split RVT's multi-view transformer into visual encoder and action decoder, and pretrain its visual encoder using masked autoencoding on large-scale 3D datasets such as Objaverse. We evaluate 3D-MVP on a suite of virtual robot manipulation tasks and demonstrate improved performance over baselines. Our results suggest that 3D-aware pretraining is a promising approach to improve generalization of vision-based robotic manipulation policies. Project site: https://jasonqsy.github.io/3DMVP
Problem

Research questions and friction points this paper is trying to address.

Improving robotic manipulation via 3D-aware visual pretraining
Enhancing 3D scene understanding for robotics using multi-view transformers
Pretraining visual encoders on large-scale 3D datasets for better generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Multi-View Pretraining with masked autoencoders
Robotic View Transformer for 3D scene understanding
Pretraining on large-scale 3D datasets like Objaverse
🔎 Similar Papers
No similar papers found.