QINCo2: Vector Compression and Search with Improved Implicit Neural Codebooks

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited rate-distortion performance and degraded nearest-neighbor search accuracy of multi-codebook quantization, caused by insufficient modeling of dependencies between the parts of a code, this paper proposes an implicit neural residual quantization framework. The method introduces three key innovations: (1) residual-based codeword pre-selection and beam-search encoding, which improve the accuracy-speed trade-off of encoding; (2) codeword pairs used to build a compact, fast approximate decoder that produces accurate candidate short-lists, significantly reducing search overhead; and (3) an optimized network architecture and training procedure that jointly learns the implicit neural codebooks and the multi-stage residual quantization. Evaluated on standard benchmarks, the approach reduces reconstruction MSE by 34% under 16-byte compression on BigANN and improves nearest-neighbor search accuracy by 24% with 8-byte encodings on Deep1M, establishing new state-of-the-art results on both tasks.
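The summary above builds on plain residual quantization (RQ), which QINCo2 improves. As background, here is a minimal NumPy sketch of greedy RQ with fixed codebooks; the sizes `M`, `K`, `d` are hypothetical, and the codebooks are random for illustration (in practice they are learned, e.g. by k-means on successive residuals):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, d = 4, 256, 32            # M codebooks of K codewords each, dimension d
codebooks = rng.normal(size=(M, K, d)).astype(np.float32)

def rq_encode(x, codebooks):
    """Greedy RQ: at each step, pick the codeword nearest to the residual."""
    residual = x.copy()
    codes = []
    for C in codebooks:
        dists = ((residual - C) ** 2).sum(axis=1)  # distance to all K codewords
        k = int(dists.argmin())
        codes.append(k)
        residual = residual - C[k]                 # quantize what remains
    return codes

def rq_decode(codes, codebooks):
    """Reconstruction is the sum of the selected codewords."""
    return sum(C[k] for C, k in zip(codebooks, codes))

x = rng.normal(size=d).astype(np.float32)
codes = rq_encode(x, codebooks)
x_hat = rq_decode(codes, codebooks)
```

QINCo and QINCo2 replace the fixed codebook at each step with one predicted by a neural network from the reconstruction so far, which is how the inter-codebook dependencies ignored by plain RQ get modeled.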

📝 Abstract
Vector quantization is a fundamental technique for compression and large-scale nearest neighbor search. For high-accuracy operating points, multi-codebook quantization associates data vectors with one element from each of multiple codebooks. An example is residual quantization (RQ), which iteratively quantizes the residual error of previous steps. Dependencies between the different parts of the code are, however, ignored in RQ, which leads to suboptimal rate-distortion performance. QINCo recently addressed this inefficiency by using a neural network to determine the quantization codebook in RQ based on the vector reconstruction from previous steps. In this paper we introduce QINCo2, which extends and improves QINCo with (i) improved vector encoding using codeword pre-selection and beam-search, (ii) a fast approximate decoder leveraging codeword pairs to establish accurate short-lists for search, and (iii) an optimized training procedure and network architecture. We conduct experiments on four datasets to evaluate QINCo2 for vector compression and billion-scale nearest neighbor search. We obtain outstanding results in both settings, improving the state-of-the-art reconstruction MSE by 34% for 16-byte vector compression on BigANN, and search accuracy by 24% with 8-byte encodings on Deep1M.
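The beam-search encoding mentioned in the abstract keeps several candidate code prefixes per step rather than committing to the single greedy choice. A simplified sketch on plain (non-neural) codebooks, with hypothetical sizes; the `argsort` shortlist here loosely plays the role of the paper's codeword pre-selection, which in QINCo2 avoids scoring all codewords with the network:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, d, B = 4, 64, 16, 8       # hypothetical sizes; B is the beam width
codebooks = rng.normal(size=(M, K, d)).astype(np.float32)

def rq_encode_beam(x, codebooks, beam=8):
    """Beam-search RQ encoding: keep the `beam` best partial codes per step.

    Greedy RQ is the special case beam=1; a wider beam usually finds
    codes with lower final squared error (a sketch of the idea, not
    QINCo2's exact algorithm).
    """
    candidates = [([], x.copy())]           # (codes so far, residual)
    for C in codebooks:
        expanded = []
        for codes, r in candidates:
            dists = ((r[None, :] - C) ** 2).sum(axis=1)
            # Shortlist: the `beam` nearest codewords for this candidate
            for k in np.argsort(dists)[:beam]:
                expanded.append((codes + [int(k)], r - C[k]))
        # Prune to the `beam` candidates with the smallest residual energy
        expanded.sort(key=lambda cr: float((cr[1] ** 2).sum()))
        candidates = expanded[:beam]
    return candidates[0][0]                 # best code sequence found

x = rng.normal(size=d).astype(np.float32)
codes = rq_encode_beam(x, codebooks, beam=B)
```

Beam search trades encoding time for rate-distortion quality: the encoder becomes slower, but the stored codes and the decoder are unchanged, which suits the write-once / read-many pattern of large-scale search indexes.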
Problem

Research questions and friction points this paper is trying to address.

Residual Quantization
Data Compression
Similarity Search
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vector Quantization
Data Compression Efficiency
Similarity Search Accuracy
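The fast-approximate-decoder contribution listed above relies on codeword pairs, which can be sketched as precomputed pairwise lookup tables: an approximate reconstruction then needs only M/2 table reads and adds. This is a simplified illustration with hypothetical sizes, and the tables here are exact sums, whereas the paper learns pair codebooks so that cheap lookups approximate the neural decoder:

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, d = 4, 16, 8              # hypothetical sizes; M assumed even
codebooks = rng.normal(size=(M, K, d)).astype(np.float32)

# pair_tables[p][i, j] = codebooks[2p][i] + codebooks[2p+1][j]:
# one K x K x d table per consecutive pair of codebooks.
pair_tables = [
    codebooks[2 * p][:, None, :] + codebooks[2 * p + 1][None, :, :]
    for p in range(M // 2)
]

def fast_decode(codes):
    """Approximate reconstruction using only pairwise table lookups."""
    return sum(pair_tables[p][codes[2 * p], codes[2 * p + 1]]
               for p in range(len(codes) // 2))
```

In search, such a cheap decoder scores database codes to build a short-list of candidates, which the exact (neural) decoder then re-ranks, keeping the expensive network out of the inner loop.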