AI Summary
This paper addresses performance bottlenecks of cloud-native vector search over remote storage by systematically comparing cluster-based and graph-based indexes under high-concurrency, high-recall, and high-dimensional workloads. Through a unified benchmarking framework, we reveal the graph index's substantial advantages in latency-sensitive and cache-constrained scenarios, and identify the root cause of why locally tuned parameters fail in cloud environments: a fundamental mismatch between index access granularity and cloud storage caching mechanisms. To resolve this, we propose a cloud-aware redesign of graph index parameters and an index-cache co-optimization strategy that dynamically adapts query granularity and data-fetch patterns to the available cache capacity. Experimental evaluation on representative cloud configurations demonstrates an average 37% reduction in query latency and a 5.2% improvement in recall. This work is the first to rigorously characterize the cloud-native applicability boundaries of these two dominant indexing paradigms.
Abstract
Vector search is widely employed in recommender systems and retrieval-augmented-generation pipelines, and is commonly performed with vector indexes to efficiently find similar items in large datasets. Recent growth in both data and task complexity has motivated placing vector indexes on remote storage -- cloud-native vector search -- for which cloud providers have recently introduced services. Yet, despite varying workload characteristics and the variety of available vector index forms, providers default to cluster-based indexes, which on paper adapt well to the differences between disk and cloud environments: their large fetch granularities align with the large optimal fetch sizes of remote storage, and their lack of notable intra-query dependencies minimizes costly round-trips (unlike graph-based indexes). This paper systematically studies cloud-native vector search: what indexes should be built, and how should they be used, for on-cloud vector search? We analyze the bottlenecks of two common index classes, cluster and graph indexes, on remote storage, and show that despite the current standardized adoption of cluster indexes on the cloud, graph indexes are favored in workloads requiring high concurrency and recall, or operating on high-dimensional data or large datatypes. We further find that, for optimal performance, on-cloud search demands significantly different indexing and search parameterizations than on-disk search. Finally, we incorporate existing cloud-based caching setups into vector search, find that certain index optimizations work against caching, and study how this can be mitigated to maximize gains under various available cache sizes.
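The round-trip argument above can be made concrete with a back-of-envelope latency model. This is an illustrative sketch only; all constants and the two query-shape functions below are assumptions for exposition, not measurements or parameters from the paper.

```python
# Illustrative remote-storage cost model (all constants are assumed):
# each fetch from cloud object storage pays a fixed round-trip latency
# plus a transfer time proportional to the bytes fetched.

RTT_MS = 10.0          # assumed round-trip latency to remote storage (ms)
MB_PER_MS = 0.1        # assumed effective transfer rate (MB per ms)

def fetch_ms(size_mb: float) -> float:
    """Latency of one remote fetch: fixed round trip + transfer time."""
    return RTT_MS + size_mb / MB_PER_MS

def cluster_query_ms(n_probes: int = 8, list_mb: float = 4.0) -> float:
    """Cluster index: probe several large posting lists with no
    intra-query dependency, so they can be issued in one concurrent
    round (one round trip, fetches sharing the link bandwidth)."""
    return RTT_MS + (n_probes * list_mb) / MB_PER_MS

def graph_query_ms(n_hops: int = 50, node_kb: float = 8.0) -> float:
    """Graph index: a sequential walk in which each hop depends on the
    previous one, so the fixed round-trip cost accrues per hop even
    though each individual fetch is tiny."""
    return n_hops * fetch_ms(node_kb / 1024)
```

Under these assumed constants, the graph walk's dependent round-trips make it slower than the single large concurrent round of the cluster index, which is the on-paper rationale for the providers' default; the paper's contribution is characterizing the workloads (high concurrency, high recall, high dimensionality, large datatypes) where this reasoning breaks down.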