IQ-LUT: Interpolated and Quantized LUT for Efficient Image Super-Resolution

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the exponential growth of index space in traditional lookup table (LUT)-based image super-resolution methods, which becomes prohibitive under increasing receptive fields and bit depths, especially for deployment on resource-constrained devices. To overcome this limitation, the authors propose an efficient convolutional neural network (ECNN) framework featuring a single-input, multi-output architecture that synergistically integrates interpolation, non-uniform quantization, and residual learning. Furthermore, knowledge distillation is introduced to guide the optimization of the quantization levels. This approach represents the first effort to cohesively combine these mechanisms within an LUT-based super-resolution paradigm, substantially compressing the index space while reducing reliance on high bit depths. Experimental results demonstrate up to a 50-fold reduction in storage overhead compared to the original ECNN, accompanied by improved reconstruction quality.
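The storage bottleneck the summary refers to can be made concrete with a back-of-envelope count (illustrative only, not taken from the paper): a LUT indexed by n input pixels, each quantized to b bits, has (2^b)^n entries, so its size is exponential in both the receptive field and the bit depth.

```python
# Illustrative estimate (not from the paper) of LUT index-space growth:
# a table indexed by `num_inputs` pixels at `bits` bits each has
# (2**bits)**num_inputs entries -- exponential in receptive field and bit depth.

def lut_entries(num_inputs: int, bits: int) -> int:
    """Number of index combinations for a LUT over num_inputs pixels at bits bits."""
    return (2 ** bits) ** num_inputs

# Growing the receptive field from a 2x2 patch to 3x3 at 4-bit indices:
print(lut_entries(4, 4))   # 65,536 entries
print(lut_entries(9, 4))   # ~6.9e10 entries
# Keeping a 2x2 patch but using the full 8-bit depth:
print(lut_entries(4, 8))   # ~4.3e9 entries
```

This is why reducing either the effective bit depth of the index (quantization) or the number of stored index points (interpolation between entries) shrinks the table multiplicatively.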
📝 Abstract
Lookup table (LUT) methods demonstrate considerable potential in accelerating image super-resolution inference. However, pursuing higher image quality through larger receptive fields and bit-depth triggers exponential growth in the LUT's index space, creating a storage bottleneck that limits deployment on resource-constrained devices. We introduce IQ-LUT, which achieves a reduction in LUT size while simultaneously enhancing super-resolution quality. First, we integrate interpolation and quantization into the single-input, multiple-output ECNN, which dramatically reduces the index space and thereby the overall LUT size. Second, the integration of residual learning mitigates the dependence on LUT bit-depth, which facilitates training stability and prioritizes the reconstruction of fine-grained details for superior visual quality. Finally, guided by knowledge distillation, our non-uniform quantization process optimizes the quantization levels, thereby reducing storage while also compensating for quantization loss. Extensive benchmarking demonstrates our approach substantially reduces storage costs (by up to 50x compared to ECNN) while achieving superior super-resolution quality.
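The non-uniform quantization mentioned in the abstract can be sketched as snapping each input value to the nearest of K learned levels, so the LUT only needs K index values per input instead of 2^bit-depth. The sketch below is a minimal illustration under that assumption; the `levels` here are hand-picked, whereas the paper optimizes them during training guided by knowledge distillation.

```python
import numpy as np

# Hedged sketch of non-uniform quantization: map each input to the index of
# its nearest quantization level. The levels below are hand-picked for
# illustration (denser near 0); IQ-LUT learns them via knowledge distillation.

def quantize_nonuniform(x: np.ndarray, levels: np.ndarray) -> np.ndarray:
    """Return, for each value in x, the index of its nearest level."""
    # Broadcast |x - level| over all levels, then pick the closest one.
    return np.abs(x[..., None] - levels).argmin(axis=-1)

levels = np.array([0.0, 0.1, 0.3, 0.7])   # non-uniform spacing
x = np.array([0.04, 0.25, 0.9])
idx = quantize_nonuniform(x, levels)
print(idx)          # indices into the 4-entry level table
print(levels[idx])  # dequantized (snapped) values
```

With only K = 4 levels per input dimension, a LUT over n inputs needs K^n entries rather than (2^b)^n, which is the index-space compression the method relies on.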
Problem

Research questions and friction points this paper is trying to address.

lookup table
image super-resolution
storage bottleneck
bit-depth
receptive field
Innovation

Methods, ideas, or system contributions that make the work stand out.

interpolated LUT
quantized LUT
image super-resolution
knowledge distillation
residual learning