Optimizing Multi-Modal Models for Image-Based Shape Retrieval: The Role of Pre-Alignment and Hard Contrastive Learning

📅 2026-03-07
🤖 AI Summary
This work addresses the modality gap in image-to-3D shape retrieval by proposing a cross-modal retrieval method that requires neither multi-view rendering nor task-specific training. Leveraging pre-aligned image–point cloud encoders from ULIP and OpenShape, the approach constructs a shared embedding space and introduces a multimodal hard contrastive learning (HCL) strategy to enhance cross-domain alignment. Notably, it is the first to integrate large-scale multimodal pretrained encoders with hard contrastive loss, enabling effective retrieval under both zero-shot and supervised settings. The method achieves state-of-the-art performance across multiple benchmarks, significantly outperforming existing approaches in Top-1 and Top-10 accuracy, with the combination of OpenShape and Point-BERT yielding the best results.

📝 Abstract
Image-based shape retrieval (IBSR) aims to retrieve 3D models from a database given a query image, hence addressing a classical task in computer vision, computer graphics, and robotics. Recent approaches typically rely on bridging the domain gap between 2D images and 3D shapes based on the use of multi-view renderings as well as task-specific metric learning to embed shapes and images into a common latent space. In contrast, we address IBSR through large-scale multi-modal pretraining and show that explicit view-based supervision is not required. Inspired by pre-aligned image--point-cloud encoders from ULIP and OpenShape that have been used for tasks such as 3D shape classification, we propose the use of pre-aligned image and shape encoders for zero-shot and standard IBSR by embedding images and point clouds into a shared representation space and performing retrieval via similarity search over compact single-embedding shape descriptors. This formulation allows skipping view synthesis and naturally enables zero-shot and cross-domain retrieval without retraining on the target database. We evaluate pre-aligned encoders in both zero-shot and supervised IBSR settings and additionally introduce a multi-modal hard contrastive loss (HCL) to further increase retrieval performance. Our evaluation demonstrates state-of-the-art performance, outperforming related methods on $Acc_{Top1}$ and $Acc_{Top10}$ for shape retrieval across multiple datasets, with best results observed for OpenShape combined with Point-BERT. Furthermore, training on our proposed multi-modal HCL yields dataset-dependent gains in standard instance retrieval tasks on shape-centric data, underscoring the value of pretraining and hard contrastive learning for 3D shape retrieval. The code will be made available via the project website.
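The retrieval formulation described in the abstract — embedding the query image and all shapes into one shared space and ranking shapes by similarity to compact single-embedding descriptors — can be sketched as follows. This is a toy numpy illustration of cosine-similarity search, not the authors' code; the encoder outputs are stubbed with random vectors.

```python
import numpy as np

def l2_normalize(x):
    # L2-normalize rows so that dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def retrieve(query_emb, shape_embs, k=10):
    """Rank shapes by cosine similarity to a query image embedding.

    query_emb:  (d,)   image embedding from a pre-aligned encoder
    shape_embs: (n, d) single-embedding shape descriptors
    Returns indices of the top-k shapes, best first.
    """
    sims = l2_normalize(shape_embs) @ l2_normalize(query_emb)
    return np.argsort(-sims)[:k]

# Toy example: the query is a slightly perturbed copy of shape 2,
# so shape 2 should rank first.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(5, 8))
query = shapes[2] + 0.01 * rng.normal(size=8)
print(retrieve(query, shapes, k=3))
```

Because both modalities live in the same space after pre-alignment (ULIP / OpenShape style), this search needs no view synthesis and no retraining on the target database — the zero-shot setting is exactly this nearest-neighbor lookup.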
Problem

Research questions and friction points this paper is trying to address.

Image-Based Shape Retrieval · 3D shape retrieval · multi-modal models · zero-shot retrieval · cross-domain retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

pre-aligned encoders · zero-shot retrieval · hard contrastive learning · multi-modal pretraining · image-based shape retrieval
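The page does not spell out the paper's hard contrastive loss (HCL), so the sketch below shows one common hard-negative-mining variant of an InfoNCE-style objective for intuition only: for each image–point-cloud pair, keep only the `m` highest-scoring negatives in the denominator. All names and the choice of `m` are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def hard_contrastive_loss(img, pc, tau=0.07, m=2):
    """Toy InfoNCE-style loss over only the m hardest negatives per query.

    img, pc: (n, d) image / point-cloud embeddings; row i of each is a
    positive pair. Assumed L2-normalized. This is NOT the paper's exact
    HCL, just one standard hard-negative-mining variant.
    """
    sims = img @ pc.T / tau                      # (n, n) similarity logits
    pos = np.diag(sims)                          # positive-pair logits
    neg = sims.copy()
    np.fill_diagonal(neg, -np.inf)               # mask out the positives
    hardest = -np.sort(-neg, axis=1)[:, :m]      # m largest negatives per row
    logits = np.concatenate([pos[:, None], hardest], axis=1)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob_pos = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob_pos.mean()

# Perfectly aligned pairs yield a near-zero loss; shuffled pairs do not.
img = np.eye(4)
print(hard_contrastive_loss(img, img))           # small
print(hard_contrastive_loss(img, img[[1, 2, 3, 0]]))  # large
```

The intuition matches the summary above: restricting the contrast to the hardest negatives focuses the gradient on the confusable cross-modal cases, tightening image–point-cloud alignment beyond what the pretrained encoders provide.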
Paul Julius Kühn
Fraunhofer IGD, 64283 Darmstadt, Germany
Cedric Spengler
Fraunhofer IGD, 64283 Darmstadt, Germany
Michael Weinmann
Delft University of Technology
Computer Vision · Computer Graphics · 3D Reconstruction · Virtual Reality · Machine Learning
Arjan Kuijper
Professor at Fraunhofer IGD and TU Darmstadt
Computer Vision · Pattern Recognition · Visual Computing · Mathematical Models · Scale Space
Saptarshi Neil Sinha
Fraunhofer IGD, 64283 Darmstadt, Germany