Optimizing SSD-Resident Graph Indexing for High-Throughput Vector Search

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the low CPU utilization and read amplification that disk-based vector approximate nearest neighbor (ANN) search suffers due to poor access locality in SSD-resident graph indexes. To overcome these limitations, the authors propose a locality-aware co-optimization approach that integrates hierarchical compression, affinity-aware data layout, record-level buffer pooling, and a coroutine-driven asynchronous prefetching mechanism to significantly reduce storage stalls and memory swapping. Coupled with a beam-aware search strategy, the design improves both I/O efficiency and computational throughput. Experiments show that the proposed system achieves up to 5.8× higher throughput and 3.25× lower latency than state-of-the-art disk-based ANN systems, while attaining 92% of the throughput of in-memory systems using only 10% of their memory footprint.

📝 Abstract
Graph-based approximate nearest neighbor search (ANNS) methods (e.g., HNSW) have become the de facto state of the art for their high precision and low latency. To scale beyond main memory, recent out-of-memory ANNS systems leverage SSDs to store large vector indexes. However, they still suffer from severe CPU underutilization and read amplification (i.e., storage stalls) caused by limited access locality during graph traversal. We present VeloANN, which mitigates storage stalls through a locality-aware data layout and a coroutine-based asynchronous runtime. VeloANN uses hierarchical compression and an affinity-based data placement scheme to co-locate related vectors within the same page, effectively reducing fragmentation and over-fetching. We further design a record-level buffer pool, where each record groups the neighbors of a vector; by persistently retaining hot records in memory, it eliminates excessive page swapping under constrained memory budgets. To minimize CPU scheduling overheads during disk I/O interruptions, VeloANN employs a coroutine-based asynchronous runtime for lightweight task scheduling. On top of this, it incorporates asynchronous prefetching and a beam-aware search strategy that prioritizes cached data, ultimately improving overall search efficiency. Extensive experiments show that VeloANN outperforms state-of-the-art disk-based ANN systems by up to 5.8x in throughput and 3.25x in latency, while achieving 0.92x the throughput of in-memory systems using only 10% of their memory footprint.
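To make the abstract's ideas concrete, the sketch below combines three of them in miniature: a record-level buffer pool (a cache keyed by graph record, retained across lookups), coroutine-based asynchronous I/O, and a beam search that fetches the whole frontier's records concurrently rather than one blocking read at a time. This is not VeloANN's implementation; the graph, cache, distance function, and `asyncio.sleep` stand-in for an SSD read are all illustrative assumptions.

```python
import asyncio
import heapq

# Hypothetical on-"disk" graph: node id -> (vector, neighbor ids).
# In an SSD-resident index these records live on flash; a dict stands in here.
DISK = {
    0: ([0.0, 0.0], [1, 2]),
    1: ([1.0, 0.0], [0, 3]),
    2: ([0.0, 1.0], [0, 3]),
    3: ([1.0, 1.0], [1, 2]),
}

CACHE = {}  # record-level buffer pool: hot records retained in memory


async def fetch_record(node):
    """Return (vector, neighbors), touching 'disk' only on a cache miss."""
    if node in CACHE:
        return CACHE[node]
    await asyncio.sleep(0)      # stand-in for an asynchronous SSD read
    CACHE[node] = DISK[node]    # retain the record after first access
    return CACHE[node]


def dist(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


async def beam_search(query, entry=0, beam_width=2):
    """Greedy beam search; all records in a beam are fetched concurrently."""
    visited = {entry}
    vec, _ = await fetch_record(entry)
    frontier = [(dist(query, vec), entry)]
    best = frontier[0]
    while frontier:
        # keep only the beam_width closest candidates
        frontier = heapq.nsmallest(beam_width, frontier)
        candidates = set()
        for _, node in frontier:
            _, nbrs = await fetch_record(node)
            candidates.update(n for n in nbrs if n not in visited)
        visited.update(candidates)
        # issue every record read for this beam at once (async prefetch),
        # instead of stalling on each SSD access in turn
        records = await asyncio.gather(*(fetch_record(n) for n in candidates))
        frontier = []
        for node, (vec, _) in zip(candidates, records):
            d = dist(query, vec)
            best = min(best, (d, node))
            frontier.append((d, node))
    return best[1]


print(asyncio.run(beam_search([0.9, 1.1])))  # → 3 (nearest node to the query)
```

Under a single query the coroutine model changes little, but with many concurrent queries one scheduler thread can keep issuing reads while others wait, which is the CPU-utilization point the paper targets.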
Problem

Research questions and friction points this paper is trying to address.

SSD-resident graph indexing
vector search
read amplification
CPU underutilization
access locality
Innovation

Methods, ideas, or system contributions that make the work stand out.

locality-aware data layout
coroutine-based asynchronous runtime
record-level buffer pool
hierarchical compression
beam-aware search