Exploiting Distribution Constraints for Scalable and Efficient Image Retrieval

📅 2024-10-09
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Image retrieval suffers from poor cross-dataset generalization and low efficiency due to high-dimensional embeddings. To address these challenges, we propose a distribution-constrained embedding optimization framework. First, we design an Autoencoder with Strong Variance Constraint (AE-SVC) to calibrate the spatial distribution of pre-trained embeddings (e.g., from DINOv2 or CLIP), thereby mitigating cosine similarity distortion. Second, we introduce Single-shot Similarity-space Distillation ((SS)₂D), a novel method that adaptively compresses embedding dimensions while preserving discriminative power, striking a better trade-off between compactness and retrieval accuracy. Evaluated on four standard benchmarks, including SoP and Pittsburgh30k, AE-SVC improves retrieval performance by up to 16% over baseline models; applying (SS)₂D on top yields a further 10% gain for smaller embedding sizes. Our framework significantly enhances both the scalability and inference efficiency of off-the-shelf foundation models for large-scale image retrieval.
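The variance-constraint idea can be illustrated with a minimal sketch. The snippet below mimics the *effect* described above, equalizing per-dimension variance of anisotropic embeddings before cosine-similarity search, using simple whitening-style normalization on synthetic data; the actual AE-SVC learns such a calibration through an autoencoder objective, whose exact loss is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for pre-trained foundation-model embeddings whose
# per-dimension variances are highly unequal (anisotropic), which
# distorts cosine-similarity search.
scales = np.array([5.0, 1.0, 0.2, 0.05])
X = rng.normal(size=(1000, 4)) * scales

# Whitening-style calibration: center, then equalize per-dimension
# variance. This mimics the effect of a strong variance constraint;
# AE-SVC itself learns a projection via an autoencoder loss.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

print(np.round(X.std(axis=0), 2))  # unequal, roughly matching `scales`
print(np.round(Z.std(axis=0), 2))  # [1. 1. 1. 1.]
```

After calibration, every dimension contributes comparably to a cosine-similarity score, rather than the search being dominated by a few high-variance dimensions.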

📝 Abstract
Image retrieval is crucial in robotics and computer vision, with downstream applications in robot place recognition and vision-based product recommendations. Modern retrieval systems face two key challenges: scalability and efficiency. State-of-the-art image retrieval systems train specific neural networks for each dataset, an approach that lacks scalability. Furthermore, since retrieval speed is directly proportional to embedding size, existing systems that use large embeddings lack efficiency. To tackle scalability, recent works propose using off-the-shelf foundation models. However, these models, though applicable across datasets, fall short in achieving performance comparable to that of dataset-specific models. Our key observation is that, while foundation models capture necessary subtleties for effective retrieval, the underlying distribution of their embedding space can negatively impact cosine similarity searches. We introduce Autoencoders with Strong Variance Constraints (AE-SVC), which, when used for projection, significantly improves the performance of foundation models. We provide an in-depth theoretical analysis of AE-SVC. Addressing efficiency, we introduce Single-shot Similarity Space Distillation ((SS)₂D), a novel approach to learn embeddings with adaptive sizes that offers a better trade-off between size and performance. We conducted extensive experiments on four retrieval datasets, including Stanford Online Products (SoP) and Pittsburgh30k, using four different off-the-shelf foundation models, including DINOv2 and CLIP. AE-SVC demonstrates up to a 16% improvement in retrieval performance, while (SS)₂D shows a further 10% improvement for smaller embedding sizes.
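One way to see what similarity-space distillation targets: compress embeddings while keeping the pairwise cosine-similarity matrix close to the full-dimensional one. The sketch below uses a closed-form PCA projection as a simple stand-in baseline under that criterion and compares it with a random linear projection on synthetic low-rank data; it is not the paper's method, since (SS)₂D instead *learns* adaptive-size embeddings by distilling the similarity space directly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, d = 300, 64, 8

# Synthetic embeddings with rank-d structure plus small noise,
# standing in for foundation-model outputs.
X = rng.normal(size=(N, d)) @ rng.normal(size=(d, D)) \
    + 0.01 * rng.normal(size=(N, D))

def cos_sim(A):
    """Pairwise cosine-similarity matrix of the rows of A."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    return A @ A.T

Xc = X - X.mean(axis=0)        # center before projecting
S_full = cos_sim(Xc)           # "teacher" similarity space (D dims)

# Compress D -> d with the top-d principal directions (a simple
# similarity-preserving baseline) vs. a random linear projection.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
err_pca = np.abs(cos_sim(Xc @ Vt[:d].T) - S_full).mean()
err_rand = np.abs(cos_sim(Xc @ rng.normal(size=(D, d))) - S_full).mean()

print(f"pca projection error:    {err_pca:.4f}")
print(f"random projection error: {err_rand:.4f}")
```

The structured projection preserves the similarity space far better than a random one at the same dimensionality, which is precisely the quantity a similarity-space distillation loss optimizes for.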
Problem

Research questions and friction points this paper is trying to address.

Improves scalability of image retrieval using foundation models.
Enhances efficiency by optimizing embedding size and performance trade-off.
Addresses distribution constraints in embedding space for better retrieval accuracy.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoencoders with Strong Variance Constraints improve retrieval.
Single-shot Similarity Space Distillation optimizes embedding sizes.
AE-SVC and (SS)₂D enhance foundation model performance.
Mohammad Omama
The University of Texas at Austin
Robotics · Machine Learning

Po-han Li
The University of Texas at Austin

Sandeep Chinchali
The University of Texas at Austin