Forward Index Compression for Learned Sparse Retrieval

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes DotVByte, a novel compression algorithm specifically optimized for inner product computation in sparse retrieval systems, addressing the high memory overhead of forward index storage. Building upon and enhancing the StreamVByte integer compression technique, DotVByte significantly reduces memory consumption while preserving retrieval accuracy and maintaining low latency. Experimental results on the MS MARCO dataset demonstrate that DotVByte achieves a superior trade-off among compression ratio, retrieval quality, and computational latency, thereby substantially improving the storage efficiency and practicality of sparse retrieval systems.

📝 Abstract
Text retrieval using learned sparse representations of queries and documents has, over the years, evolved into a highly effective approach to search. It is thanks to recent advances in approximate nearest neighbor search, with the emergence of highly efficient algorithms such as the inverted index-based Seismic and the graph-based HNSW, that retrieval with sparse representations became viable in practice. In this work, we scrutinize the efficiency of sparse retrieval algorithms and focus particularly on the size of a data structure that is common to all algorithmic flavors and that constitutes a substantial fraction of the overall index size: the forward index. In particular, we seek compression techniques to reduce the storage footprint of the forward index without compromising search quality or inner product computation latency. In our examination with various integer compression techniques, we report that StreamVByte achieves the best trade-off between memory footprint, retrieval accuracy, and latency. We then improve StreamVByte by introducing DotVByte, a new algorithm tailored to inner product computation. Experiments on MS MARCO show that our improvements lead to significant space savings while maintaining retrieval efficiency.
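To make the abstract's ideas concrete, the sketch below illustrates the general mechanics involved: a StreamVByte-style byte-oriented varint encoding (one 2-bit length code per integer, four codes packed into a control byte, data bytes kept in a separate stream), followed by a single-pass loop that decodes a document's (term id, weight) pairs and accumulates the query dot product without materializing the full decoded arrays. This is an illustrative Python sketch, not the paper's DotVByte algorithm: all function names are hypothetical, and the real implementations rely on SIMD shuffles over the control bytes rather than per-integer Python loops.

```python
def svb_encode(values):
    """StreamVByte-style encode of unsigned 32-bit ints.

    Each integer uses 1-4 data bytes; its byte count minus one is a
    2-bit code, and four codes are packed into one control byte.
    Returns (control bytes, data bytes, number of integers)."""
    controls, data = bytearray(), bytearray()
    for group_start in range(0, len(values), 4):
        ctrl = 0
        for i, v in enumerate(values[group_start:group_start + 4]):
            nbytes = max(1, (v.bit_length() + 7) // 8)  # 1..4 bytes
            ctrl |= (nbytes - 1) << (2 * i)
            data += v.to_bytes(nbytes, "little")
        controls.append(ctrl)
    return bytes(controls), bytes(data), len(values)

def svb_iter(controls, data, count):
    """Decode lazily, yielding one integer at a time."""
    pos, emitted = 0, 0
    for ctrl in controls:
        for i in range(4):
            if emitted == count:
                return
            nbytes = ((ctrl >> (2 * i)) & 0b11) + 1
            yield int.from_bytes(data[pos:pos + nbytes], "little")
            pos += nbytes
            emitted += 1

def svb_dot(enc_term_ids, enc_weights, query):
    """Fused decode + inner product over one document's forward-index
    entry: term ids and quantized weights are decoded in lockstep and
    accumulated against the query, never fully materialized."""
    return sum(query.get(t, 0) * w
               for t, w in zip(svb_iter(*enc_term_ids),
                               svb_iter(*enc_weights)))
```

For example, a document with terms `{2: 3, 7: 5, 42: 2}` scored against a query `{7: 10, 42: 4}` yields `5*10 + 2*4 = 58` while reading only the compressed streams; fusing the decode into the scoring loop is one plausible way a compression scheme can be "tailored to inner product computation" as the abstract describes.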
Problem

Research questions and friction points this paper is trying to address.

forward index
compression
learned sparse retrieval
storage footprint
inner product computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

forward index compression
learned sparse retrieval
StreamVByte
DotVByte
inner product computation