Duoduo CLIP: Efficient 3D Understanding with Multi-View Images

📅 2024-06-17
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses key limitations of point-cloud-based 3D shape representation—namely, high computational cost, viewpoint sensitivity, and reliance on geometric primitives. We propose a lightweight, multi-view image-driven framework for 3D understanding that bypasses point clouds entirely, directly encoding RGB images from multiple viewpoints. Our method integrates CLIP’s 2D vision-language priors and introduces a cross-view self-attention mechanism coupled with a permutation-invariant architecture to achieve pose- and order-agnostic 3D representation learning. End-to-end contrastive learning optimizes text-shape alignment. Contributions include: (1) the first native multi-view image-based lightweight 3D representation paradigm (87M parameters), supporting variable numbers of input views; (2) substantial computational reduction (57 A5000 GPU-hours vs. 480 A100 GPU-hours); and (3) state-of-the-art performance on real-image benchmarks, surpassing leading point-cloud methods in fine-grained text-shape retrieval accuracy and cross-domain generalization.
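The summary states that end-to-end contrastive learning optimizes text–shape alignment. As a rough sketch of what a CLIP-style symmetric contrastive objective looks like (a generic illustration, not the authors' code; the batch pairing and `temperature` value are assumptions):

```python
import numpy as np

def clip_contrastive_loss(shape_emb, text_emb, temperature=0.07):
    # L2-normalize both embedding sets, then take cosine-similarity logits.
    s = shape_emb / np.linalg.norm(shape_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature  # (B, B); matched pairs lie on the diagonal

    def cross_entropy(lg):
        # row-wise log-softmax; the correct class for row i is column i
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.diag(logp).mean()

    # symmetric loss: shape-to-text and text-to-shape directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 8))
aligned = clip_contrastive_loss(emb, emb)             # matched pairs: low loss
mismatched = clip_contrastive_loss(emb, emb[[1, 2, 3, 0]])  # shuffled pairs: higher loss
```

The loss pulls each shape embedding toward its paired caption embedding and pushes it away from the other captions in the batch, which is what drives the text–shape retrieval results reported below.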

📝 Abstract
We introduce Duoduo CLIP, a model for 3D representation learning that learns shape encodings from multi-view images instead of point clouds. The choice of multi-view images allows us to leverage 2D priors from off-the-shelf CLIP models to facilitate fine-tuning with 3D data. Our approach not only shows better generalization than existing point cloud methods, but also reduces GPU requirements and training time. In addition, the model is modified with cross-view attention to leverage information across multiple frames of the object, which further boosts performance. Notably, our model is permutation invariant to the order of multi-view images while being pose-free. Compared to the current SOTA point cloud method, which requires 480 A100 hours to train 1 billion model parameters, we require only 57 A5000 hours and 87 million parameters. Multi-view images also provide more flexibility, including the ability to encode objects with a variable number of images, and performance scales as more views are used. In contrast, point cloud based methods require an entire scan or model of the object. We showcase this flexibility with benchmarks built from images of real-world objects. Our model also achieves better performance on more fine-grained text-to-shape retrieval, demonstrating better text-and-shape alignment than point cloud based models.
Problem

Research questions and friction points this paper is trying to address.

How to achieve efficient 3D understanding from multi-view images rather than point clouds.
How to reduce the GPU requirements and training time of 3D representation learning.
How to improve fine-grained text-to-shape retrieval and text–shape alignment.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multi-view images for 3D representation learning
Leverages 2D priors from CLIP for 3D fine-tuning
Incorporates cross-view attention for enhanced performance
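The cross-view attention and permutation-invariance claims above can be sketched as a single-head self-attention over the pooled token set from all views (a minimal NumPy illustration under assumed shapes, not the paper's actual architecture): because the tokens from all views are treated as one unordered set and no view-order embedding is added, the pooled output does not depend on the order in which views are fed in.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(views, Wq, Wk, Wv):
    # views: (V, T, D) patch tokens from V views. Flatten into one token set
    # so every token attends to tokens from every other view.
    tokens = views.reshape(-1, views.shape[-1])       # (V*T, D)
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))    # attention across all views
    out = attn @ v                                    # (V*T, D)
    return out.mean(axis=0)                           # pooled shape embedding

rng = np.random.default_rng(0)
D = 8
views = rng.normal(size=(4, 5, D))                    # 4 views, 5 tokens each
Wq, Wk, Wv = [rng.normal(size=(D, D)) for _ in range(3)]
emb = cross_view_attention(views, Wq, Wk, Wv)
emb_perm = cross_view_attention(views[[2, 0, 3, 1]], Wq, Wk, Wv)
assert np.allclose(emb, emb_perm)                     # view order does not matter
```

This set-wise treatment is also what allows a variable number of input views: `V` can change between objects without altering the model's weights.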
Han-Hung Lee
Simon Fraser University
Yiming Zhang
Simon Fraser University
Angel X. Chang
Simon Fraser University, Canada CIFAR AI Chair, Amii