🤖 AI Summary
This work addresses the challenges faced by existing vector retrieval systems in large-scale real-time scenarios, where low throughput, difficulty with dynamic updates, and GPU memory constraints prevent the simultaneous achievement of high accuracy and low latency. To overcome these limitations, we propose a CPU-GPU-disk collaborative real-time vector retrieval framework that integrates a hierarchical indexing structure, workload-aware caching, CUDA multi-stream optimization, concurrency control, and adaptive resource scheduling to enable efficient online updates and low-latency queries. Experimental results across diverse streaming workloads demonstrate that the system achieves a 20.9× average throughput improvement and a 1.3–50.7× reduction in query latency while maintaining high recall, substantially alleviating the performance bottlenecks of dynamic large-scale vector retrieval.
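The hierarchical CPU-GPU-disk placement described above can be illustrated with a minimal sketch: hot vectors go to the fastest tier, warm vectors to CPU RAM, and the long tail to disk. All class and parameter names here (`TieredStore`, the hotness thresholds) are hypothetical illustrations, not taken from the paper, and plain dictionaries stand in for GPU memory and disk storage.

```python
class TieredStore:
    """Toy three-tier placement: GPU (fast) -> CPU (medium) -> disk (slow)."""

    def __init__(self, gpu_capacity, cpu_capacity):
        self.gpu = {}    # id -> vector; stands in for GPU memory
        self.cpu = {}    # id -> vector; stands in for CPU RAM
        self.disk = {}   # id -> vector; stands in for on-disk storage
        self.gpu_capacity = gpu_capacity
        self.cpu_capacity = cpu_capacity

    def insert(self, vid, vector, hotness):
        """Place a vector in a tier according to its access hotness (0..1)."""
        if hotness > 0.8 and len(self.gpu) < self.gpu_capacity:
            self.gpu[vid] = vector
        elif hotness > 0.3 and len(self.cpu) < self.cpu_capacity:
            self.cpu[vid] = vector
        else:
            self.disk[vid] = vector

    def lookup(self, vid):
        """Search tiers from fastest to slowest."""
        for tier in (self.gpu, self.cpu, self.disk):
            if vid in tier:
                return tier[vid]
        return None
```

In the real system, placement would be driven by the workload-aware policy and index structure rather than a fixed threshold; this sketch only conveys the tiering idea.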
📝 Abstract
Approximate Nearest Neighbor Search (ANNS) underpins modern applications such as information retrieval and recommendation. With the rapid growth of vector data, efficient indexing for real-time vector search has become essential. Existing CPU-based solutions support updates but suffer from low throughput, while GPU-accelerated systems deliver high performance but struggle with dynamic updates and limited GPU memory, leaving a critical performance gap for continuous, large-scale vector search that requires both accuracy and speed. In this paper, we present SVFusion, a GPU-CPU-disk collaborative framework for real-time vector search that combines high-performance GPU computation with online updates. SVFusion leverages a hierarchical vector index architecture built on CPU-GPU co-processing, along with a workload-aware vector caching mechanism that maximizes the efficiency of limited GPU memory. It further improves performance through CUDA multi-stream optimization and adaptive resource management, along with concurrency control that ensures data consistency under interleaved queries and updates. Empirical results demonstrate that SVFusion achieves significant improvements in query latency and throughput, delivering 20.9x higher throughput on average and 1.3x to 50.7x lower latency than baseline methods, while maintaining high recall on large-scale datasets under various streaming workloads.
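The workload-aware caching idea mentioned in the abstract can be sketched as a frequency-weighted cache: when the (simulated) GPU-resident cache is full, the vector with the lowest recent access count is evicted so that capacity tracks what the current query stream actually touches. The class, the eviction rule, and the `fetch_from_backing` callback are all illustrative assumptions; the paper's actual policy may differ.

```python
from collections import Counter

class WorkloadAwareCache:
    """Toy frequency-based cache; the dict stands in for GPU memory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}        # id -> vector currently "on GPU"
        self.freq = Counter()  # observed access counts per vector id

    def get(self, vid, fetch_from_backing):
        """Return a vector, caching it and evicting the coldest entry if full."""
        self.freq[vid] += 1
        if vid in self.cache:
            return self.cache[vid]
        vector = fetch_from_backing(vid)  # e.g. CPU/disk tier lookup
        if len(self.cache) >= self.capacity:
            # Evict the cached vector with the lowest observed frequency.
            coldest = min(self.cache, key=lambda k: self.freq[k])
            del self.cache[coldest]
        self.cache[vid] = vector
        return vector
```

A production design would also need to account for update streams and batch transfers over PCIe rather than per-vector fetches; this sketch only shows the eviction principle.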