🤖 AI Summary
Existing graph-based approximate nearest neighbor search (ANNS) algorithms suffer in production environments from inefficient random memory access, high distance-computation overhead, and sensitivity to hyperparameter tuning—often requiring frequent index rebuilds. To address these limitations, we propose three key innovations: (1) a novel automated hyperparameter search mechanism that requires no index rebuilding; (2) a cache-friendly vector organization scheme coupled with an L3-cache-aware prefetching strategy; and (3) hardware-aware dynamic low-precision distance computation, integrating scalar quantization with AVX-512 acceleration. Evaluated on real-world datasets, our approach achieves up to 4× higher query throughput than HNSWlib at equivalent recall. The design significantly improves both system throughput and deployment efficiency—eliminating costly index reconstruction cycles and reducing memory bandwidth pressure without compromising retrieval quality.
📝 Abstract
Approximate nearest neighbor search (ANNS) is a fundamental problem in vector databases and AI infrastructures. Recent graph-based ANNS algorithms have achieved high search accuracy with practical efficiency. Despite these advancements, such algorithms still face performance bottlenecks in production, due to the random memory access patterns of graph-based search and the high computational overhead of vector distance computation. In addition, the performance of a graph-based ANNS algorithm is highly sensitive to its parameters, while selecting the optimal parameters is cost-prohibitive, e.g., manual tuning requires repeatedly rebuilding the index. This paper introduces VSAG, an open-source framework that aims to enhance the in-production performance of graph-based ANNS algorithms. VSAG has been deployed at scale in the services of Ant Group, and it incorporates three key optimizations: (i) efficient memory access: it reduces L3 cache misses with prefetching and cache-friendly vector organization; (ii) automated parameter tuning: it automatically selects performance-optimal parameters without requiring index rebuilding; (iii) efficient distance computation: it leverages modern hardware and scalar quantization, and smartly switches to low-precision representations to dramatically reduce distance computation costs. We evaluate VSAG on real-world datasets. The experimental results show that VSAG achieves state-of-the-art performance and provides up to 4× speedup over HNSWlib (an industry-standard library) while ensuring the same accuracy.
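To make optimization (iii) concrete, the following is a minimal sketch of scalar quantization for low-precision distance computation. It is an illustration of the general SQ8 technique (per-dimension min–max quantization to 8-bit codes), not VSAG's actual implementation; all function names here are hypothetical, and VSAG additionally accelerates the quantized distance with AVX-512.

```python
import numpy as np

def sq_train(vectors: np.ndarray):
    """Learn per-dimension quantization bounds (min/max) from the dataset."""
    return vectors.min(axis=0), vectors.max(axis=0)

def sq_encode(v: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Map each float32 dimension to an 8-bit code in [0, 255]."""
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero on flat dims
    return np.clip(np.round((v - lo) / span * 255.0), 0, 255).astype(np.uint8)

def sq_decode(code: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Reconstruct an approximate float vector from its 8-bit codes."""
    span = np.where(hi > lo, hi - lo, 1.0)
    return lo + code.astype(np.float32) / 255.0 * span

def approx_l2(code_a: np.ndarray, code_b: np.ndarray,
              lo: np.ndarray, hi: np.ndarray) -> float:
    """Low-precision squared L2 distance computed from the quantized codes.

    In a SIMD implementation this reduces to integer arithmetic on the raw
    uint8 codes, which is what makes AVX-512 acceleration effective.
    """
    da, db = sq_decode(code_a, lo, hi), sq_decode(code_b, lo, hi)
    return float(np.sum((da - db) ** 2))
```

The key trade-off this sketches is that 8-bit codes shrink each vector by 4× (cutting memory bandwidth during graph traversal) while introducing only a small, bounded quantization error in the distances, which a search pipeline can correct by re-ranking candidates with full-precision vectors.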