🤖 AI Summary
Existing ANN benchmark datasets are severely misaligned with modern embedding applications such as RAG, and fail to reflect index performance under realistic distribution shifts. To address this gap, we propose the first vector-indexing benchmark framework designed specifically for embedding applications. Our framework introduces out-of-distribution (OOD) query sets to model practical retrieval shifts and provides an end-to-end embedding-generation pipeline covering mainstream dense models (e.g., BERT, CLIP) and representative downstream tasks. It integrates major deep learning frameworks (PyTorch/TensorFlow) and indexing libraries (FAISS, Annoy, hnswlib), along with standardized evaluation protocols. We systematically evaluate 21 indexing methods across 12 in-distribution and 6 OOD datasets, revealing for the first time systematic performance degradation under distribution shift. The complete, reproducible toolchain is open-sourced.
📝 Abstract
Approximate nearest neighbor (ANN) search is a performance-critical component of many machine learning pipelines. Rigorous benchmarking is essential for evaluating the performance of vector indexes for ANN search. However, the datasets used by existing benchmarks are no longer representative of current applications of ANN search, so there is an urgent need for an up-to-date set of benchmarks. To this end, we introduce the Vector Index Benchmark for Embeddings (VIBE), an open-source project for benchmarking ANN algorithms. VIBE contains a pipeline for creating benchmark datasets using dense embedding models characteristic of modern applications, such as retrieval-augmented generation (RAG). To replicate real-world workloads, we also include out-of-distribution (OOD) datasets in which the queries and the corpus are drawn from different distributions. We use VIBE to conduct a comprehensive evaluation of state-of-the-art vector indexes, benchmarking 21 implementations on 12 in-distribution and 6 OOD datasets.
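To make the evaluation setting concrete, here is a minimal, self-contained sketch of the kind of measurement such a benchmark performs: an approximate index (here a toy IVF-style coarse-quantization search, not VIBE's actual code) is compared against exact brute-force search via recall@k, once with queries drawn from the corpus distribution and once with shifted (OOD) queries. All names, the synthetic data, and the parameters (`n_cells`, `nprobe`) are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def recall_at_k(approx_ids, exact_ids, k=10):
    """Fraction of the true top-k neighbors that the index retrieved."""
    return float(np.mean([len(set(a[:k]) & set(e[:k])) / k
                          for a, e in zip(approx_ids, exact_ids)]))

# Synthetic corpus: points clustered around a few centers,
# loosely mimicking the cluster structure of embedding spaces.
d, n = 32, 5000
centers = rng.normal(size=(8, d))
corpus = centers[rng.integers(8, size=n)] + 0.1 * rng.normal(size=(n, d))

# Toy IVF-style index: assign each corpus point to its nearest coarse
# cell; at query time, scan only the `nprobe` closest cells.
n_cells, nprobe = 64, 4
cell_centers = corpus[rng.choice(n, n_cells, replace=False)]
assign = np.argmin(((corpus[:, None] - cell_centers) ** 2).sum(-1), axis=1)

def search(queries, k=10):
    out = []
    for q in queries:
        cells = np.argsort(((cell_centers - q) ** 2).sum(-1))[:nprobe]
        cand = np.where(np.isin(assign, cells))[0]
        out.append(cand[np.argsort(((corpus[cand] - q) ** 2).sum(-1))[:k]])
    return out

def exact(queries, k=10):
    return [np.argsort(((corpus - q) ** 2).sum(-1))[:k] for q in queries]

# In-distribution queries share the corpus's cluster structure;
# OOD queries are drawn from a different (shifted, wider) distribution.
q_id = centers[rng.integers(8, size=100)] + 0.1 * rng.normal(size=(100, d))
q_ood = 2.0 * rng.normal(size=(100, d))

r_id = recall_at_k(search(q_id), exact(q_id))
r_ood = recall_at_k(search(q_ood), exact(q_ood))
print(f"recall@10  in-distribution: {r_id:.2f}   OOD: {r_ood:.2f}")
```

A real benchmark run would swap the toy index for library implementations (e.g., FAISS or hnswlib), use actual embedding-model outputs rather than Gaussian clusters, and sweep index parameters to trace out recall-throughput curves; the recall comparison between matched and shifted query distributions is the part this sketch illustrates.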