🤖 AI Summary
To address the prohibitively large lookup table (LUT) overhead in high-rate nested lattice quantization, where the LUT size $2^{2dR}$ grows exponentially with the rate $R$, this paper proposes a hierarchical nested lattice quantization framework. The $d$-dimensional vector quantization is decomposed into $M$ layers, each employing a nested lattice structure combined with product codes. This reduces the LUT size to $2^{2dR/M}$, the $M$-th root of the original, so the LUT needs to grow only with the per-layer rate $R/M$ rather than the full rate $R$. To the authors' knowledge, this is the first scheme to relax the exponential dependence of LUT size on the quantization rate while incurring an asymptotically negligible distortion penalty. Numerical experiments confirm that the method achieves near-optimal quantization accuracy at high rates, and the resulting LUT reduction significantly improves hardware efficiency and deployment feasibility.
📝 Abstract
Recent work has shown that the problem of quantization for matrix multiplication can be optimally solved by quantizing each column of each matrix using a nested lattice code, and then multiplying the de-quantized matrices. It was further demonstrated that when product codes of sub-dimension $d$ and rate $R$ are used, the de-quantization and inner product operations can be implemented by querying a lookup table (LUT) of size $2^{2dR}$, but this is only practical when $dR$ is sufficiently small. This in turn limits LUT-based inner product decoding to low-rate quantizers. In this work, we develop a rate-$R$ hierarchical nested lattice quantization framework, which quantizes each vector to $M$ layers and admits LUT-based inner product decoding using an LUT of size $2^{2d\frac{R}{M}}$, allowing for high-rate quantization. We provide analytic bounds on the loss of the developed scheme compared to standard nested lattice quantizers, and numerically illustrate that this loss is negligible. Thus, our scheme enables the use of small LUTs without compromising the overall distortion.
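To make the layered LUT idea concrete, here is a toy sketch of the mechanism, not the paper's actual construction: it uses the integer lattice $\mathbb{Z}^d$ in place of a general nested lattice pair, and realizes the $M$ layers as a base-$q$ digit expansion of the quantized coordinates. All variable names (`encode`, `decode`, `lut_inner_product`) and the folding step are illustrative assumptions. With per-layer rate $\log_2 q$ bits per dimension, the total rate is $R = M\log_2 q$, yet the LUT only spans single-layer codeword pairs, i.e. $q^{2d} = 2^{2dR/M}$ entries instead of $2^{2dR}$.

```python
import itertools
import numpy as np

d, q, M = 2, 4, 3   # sub-dimension, per-layer alphabet size, number of layers
# Per-layer rate: log2(q) bits/dim; total rate R = M*log2(q) bits/dim.

def encode(x):
    """Toy hierarchical encoder on the integer lattice Z^d: round each
    coordinate to an integer, then write it in base q as M digits
    b_0, ..., b_{M-1} (least-significant digit first)."""
    v = np.round(x).astype(int) % q**M  # fold into one period (toy overload rule)
    digits = []
    for _ in range(M):
        digits.append(v % q)
        v //= q
    return digits                        # M vectors in {0, ..., q-1}^d

def decode(digits):
    """Reconstruct the quantized point as sum_m q^m * b_m."""
    return sum(q**m * b for m, b in enumerate(digits))

# One LUT over single-layer codewords: q^d x q^d = 2^{2dR/M} entries,
# versus 2^{2dR} for a flat rate-R quantizer.
codewords = list(itertools.product(range(q), repeat=d))
index = {c: i for i, c in enumerate(codewords)}
LUT = np.array([[np.dot(a, b) for b in codewords] for a in codewords])

def lut_inner_product(dx, dy):
    """Inner product of the reconstructions via M^2 LUT queries: by
    bilinearity, <x_hat, y_hat> = sum_{m,n} q^(m+n) <b_m, c_n>."""
    return sum(q**(m + n) * LUT[index[tuple(bm)], index[tuple(cn)]]
               for m, bm in enumerate(dx) for n, cn in enumerate(dy))

x, y = np.array([7.2, 30.9]), np.array([12.1, 5.4])
dx, dy = encode(x), encode(y)
# The LUT-based result matches the inner product of the de-quantized vectors.
assert lut_inner_product(dx, dy) == np.dot(decode(dx), decode(dy))
```

Here the LUT has $16 \times 16 = 2^8$ entries, while a flat quantizer at the same total rate ($R = 6$ bits per dimension, $d = 2$) would need a $2^{24}$-entry table; this is the $M$-th-root saving the abstract refers to, at the cost of $M^2$ table queries per inner product.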