🤖 AI Summary
To address the limitations of CNNs in modeling global geometric relationships and their heavy reliance on large-scale labeled data for 3D shape recognition, this paper proposes a multi-view contrastive learning framework based on Vision Transformers (ViT). Methodologically, it introduces the first systematic integration of supervised and self-supervised contrastive objectives to jointly optimize global semantic representations and local discriminative features; ViT serves as the backbone for extracting multi-view features, enabling end-to-end training on benchmarks such as ModelNet. Key contributions include: (1) empirical validation of ViT’s structural advantages over CNNs for 3D shape understanding; (2) a synergistic contrastive learning paradigm that substantially reduces dependency on labeled data; and (3) state-of-the-art performance—90.6% accuracy on ModelNet10—significantly outperforming mainstream CNN-based baselines.
📝 Abstract
This paper addresses the challenge of 3D shape representation learning by investigating state-of-the-art backbones paired with both supervised and self-supervised contrastive learning objectives. Computer vision methods struggle to recognize 3D objects from 2D images, often requiring extensive labeled data and relying on Convolutional Neural Networks (CNNs) that may overlook crucial shape relationships. Our work demonstrates that Vision Transformer (ViT)-based architectures, when paired with modern contrastive objectives, achieve promising results in multi-view 3D analysis on our downstream tasks, unifying contrastive learning and 3D shape understanding pipelines. For example, a supervised contrastive loss reached about 90.6% accuracy on ModelNet10. The success stems from pairing the two components' complementary strengths: ViTs capture global shape semantics, while contrastive optimization refines local discriminative features, together reducing the need for extensive labeled data and overcoming CNNs' limitations in modeling shape relationships. Importantly, our approach is empirical: it is grounded in extensive experimental evaluation validating the effectiveness of combining ViTs with contrastive objectives for 3D representation learning.
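To make the supervised contrastive objective mentioned above concrete, here is a minimal pure-Python sketch of a SupCon-style loss: each anchor is pulled toward same-label embeddings and pushed from all others under a temperature-scaled cosine-similarity softmax. This is an illustrative assumption, not the paper's actual implementation; the function names, temperature value, and toy 2-D embeddings are all hypothetical.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def supcon_loss(embeddings, labels, tau=0.1):
    # Supervised contrastive loss: for each anchor i, average the
    # -log softmax probability of its same-label positives over all
    # other samples in the batch, with temperature tau.
    n = len(embeddings)
    total = 0.0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue
        denom = sum(math.exp(cosine(embeddings[i], embeddings[a]) / tau)
                    for a in range(n) if a != i)
        for p in positives:
            prob = math.exp(cosine(embeddings[i], embeddings[p]) / tau) / denom
            total -= math.log(prob) / len(positives)
    return total / n

# Toy batch: the loss is small when same-label embeddings cluster
# together, and large when classes are intermixed.
clustered = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
mixed = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]]
labels = [0, 0, 1, 1]
```

In the multi-view setting described above, `embeddings` would be per-view (or view-pooled) ViT features; here plain lists stand in for them to keep the sketch self-contained.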