CL3R: 3D Reconstruction and Contrastive Learning for Enhanced Robotic Manipulation Representations

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current robotic perception modules rely heavily on 2D foundation models: they offer strong semantic understanding but weak 3D spatial modeling and poor generalization across camera viewpoints, which limits fine-grained manipulation performance. To address this, we propose a unified pretraining framework that jointly encodes 3D geometry and semantics. Specifically, we train a point cloud Masked Autoencoder (Point-MAE) to learn rich 3D representations, and we apply contrastive learning to transfer semantic knowledge from pretrained 2D foundation models into the 3D encoder. In addition, we unify point cloud coordinate systems across datasets and randomly fuse multi-view point clouds during pretraining, mitigating camera view ambiguity. Evaluated on both simulated and real-world robotic platforms, our method significantly improves cross-view perception robustness and policy control accuracy in fine-grained manipulation tasks, including peg insertion and block stacking. This work establishes a scalable 3D-2D co-perception paradigm for embodied intelligence.
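
As a concrete illustration of the 2D-to-3D knowledge transfer described above, the sketch below aligns features from a trainable point cloud encoder with features from a frozen 2D foundation model via a symmetric InfoNCE loss. This is a minimal PyTorch sketch under stated assumptions: the function name, tensor shapes, and temperature value are illustrative, not taken from the paper's code.

```python
# Hypothetical sketch of the contrastive 2D->3D semantic transfer step.
import torch
import torch.nn.functional as F

def contrastive_transfer_loss(feat_3d, feat_2d, temperature=0.07):
    """Symmetric InfoNCE loss aligning point-cloud features (feat_3d)
    with features from a frozen 2D foundation model (feat_2d).

    Both tensors have shape (batch, dim); row i of each is assumed to
    describe the same scene, so matching rows form the positive pairs.
    """
    z3 = F.normalize(feat_3d, dim=-1)   # unit-norm 3D features
    z2 = F.normalize(feat_2d, dim=-1)   # unit-norm 2D features
    logits = z3 @ z2.t() / temperature  # (B, B) scaled cosine similarities
    targets = torch.arange(z3.size(0), device=z3.device)
    # Cross-entropy in both retrieval directions: 3D->2D and 2D->3D.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Example: align 256-dim features for a batch of 8 scenes.
loss = contrastive_transfer_loss(torch.randn(8, 256), torch.randn(8, 256))
```

In practice only the 3D branch would receive gradients, since the 2D foundation model stays frozen during this transfer.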

📝 Abstract
Building a robust perception module is crucial for visuomotor policy learning. While recent methods incorporate pre-trained 2D foundation models into robotic perception modules to leverage their strong semantic understanding, they struggle to capture 3D spatial information and generalize across diverse camera viewpoints. These limitations hinder the policy's effectiveness, especially in fine-grained robotic manipulation scenarios. To address these challenges, we propose CL3R, a novel 3D pre-training framework designed to enhance robotic manipulation policies. Our method integrates both spatial awareness and semantic understanding by employing a point cloud Masked Autoencoder to learn rich 3D representations while leveraging pre-trained 2D foundation models through contrastive learning for efficient semantic knowledge transfer. Additionally, by unifying coordinate systems across datasets and introducing random fusion of multi-view point clouds, we mitigate camera view ambiguity and improve generalization, enabling robust perception from novel viewpoints at test time. Extensive experiments in both simulation and the real world demonstrate the superiority of our method, highlighting its effectiveness in visuomotor policy learning for robotic manipulation.
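
To make the coordinate unification and random multi-view fusion concrete, here is a minimal NumPy sketch: each camera's cloud is mapped into a shared base frame through its extrinsics, and a random subset of views is fused so the encoder never relies on one fixed viewpoint. The function name, the keep_prob parameter, and the camera-to-base convention for the extrinsics are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch of multi-view point cloud unification and random fusion.
import numpy as np

def fuse_views(clouds, extrinsics, keep_prob=0.7, rng=None):
    """Fuse a random subset of camera views in a shared base frame.

    clouds: list of (N_i, 3) arrays in each camera's coordinate frame.
    extrinsics: list of (4, 4) camera-to-base homogeneous transforms.
    """
    rng = rng or np.random.default_rng()
    keep = rng.random(len(clouds)) < keep_prob   # random view subset
    if not keep.any():                           # never drop every view
        keep[rng.integers(len(clouds))] = True
    fused = []
    for pts, T, k in zip(clouds, extrinsics, keep):
        if not k:
            continue
        homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        fused.append((homo @ T.T)[:, :3])        # camera frame -> base frame
    return np.concatenate(fused, axis=0)
```

Because every retained view lands in the same base frame, the fused cloud looks the same to the encoder regardless of which cameras produced it, which is what enables robust perception from novel viewpoints.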
Problem

Research questions and friction points this paper is trying to address.

Enhancing robotic manipulation with 3D spatial awareness
Improving generalization across diverse camera viewpoints
Integrating semantic understanding with 3D representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D pre-training with a point cloud Masked Autoencoder (see the sketch after this list)
Contrastive learning for semantic knowledge transfer
Multi-view point cloud fusion for generalization
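
The masking step behind the first innovation can be sketched as follows: a random fraction of point patches (groups of points, e.g., gathered around farthest-point-sampled centers) is hidden, only the visible groups reach the encoder, and a decoder is trained to reconstruct the hidden groups under a Chamfer-distance loss. The shapes and mask ratio here are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of Point-MAE-style random patch masking.
import torch

def mask_point_patches(patches, mask_ratio=0.6):
    """Hide a random fraction of grouped point patches.

    patches: (B, G, K, 3) tensor; B clouds, G groups of K points each.
    Returns the visible patches (B, G - num_mask, K, 3) and a (B, G)
    boolean mask in which True marks a hidden group.
    """
    B, G = patches.shape[:2]
    num_mask = int(G * mask_ratio)
    ids = torch.rand(B, G).argsort(dim=1)   # random per-sample group order
    mask = torch.zeros(B, G, dtype=torch.bool)
    rows = torch.arange(B).unsqueeze(1)     # (B, 1) batch row indices
    mask[rows, ids[:, :num_mask]] = True    # hide the first num_mask groups
    visible = patches[~mask].reshape(B, G - num_mask, *patches.shape[2:])
    return visible, mask
```

With, say, G = 64 groups and a 0.6 mask ratio, the encoder sees only 26 visible patches per cloud, pushing it to learn geometry-aware representations rather than memorizing raw coordinates.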
👥 Authors
Wenbo Cui
Institute of Automation, Chinese Academy of Sciences
Chengyang Zhao
Carnegie Mellon University
Robotics · Machine Learning · 3D Computer Vision
Yuhui Chen
Institute of Automation, Chinese Academy of Sciences
Haoran Li
Institute of Automation, Chinese Academy of Sciences
Zhizheng Zhang
Beijing Academy of Artificial Intelligence
Dongbin Zhao
Institute of Automation, Chinese Academy of Sciences
Deep Reinforcement Learning · Adaptive Dynamic Programming · Game AI · Smart Driving · Robotics
He Wang
Beijing Academy of Artificial Intelligence