🤖 AI Summary
Existing arbitrary-scale image super-resolution (ASISR) methods face a fundamental trade-off: lookup-table (LUT)-based approaches support only fixed scaling factors, whereas implicit neural representations incur prohibitive computational and memory overhead. To address this, we propose IM-LUT, a highly efficient, lightweight ASISR framework. First, we design a learnable Interpolation-Mixing Network (IM-Net) that dynamically fuses multiple interpolation kernels based on local image features and the target scale. Second, we distill IM-Net offline into a compact lookup table (IM-LUT), enabling fast, CPU-only inference via table lookups instead of network forward passes. To our knowledge, IM-LUT is the first method to integrate adaptive interpolation mixing with LUT-based deployment. Extensive experiments demonstrate that IM-LUT consistently outperforms state-of-the-art ASISR methods across multiple benchmarks, achieving a superior quality-efficiency trade-off and strong practicality for edge devices.
📝 Abstract
Super-resolution (SR) has been a pivotal task in image processing, aimed at enhancing image resolution across various applications. Recently, look-up table (LUT)-based approaches have attracted interest due to their efficiency and performance. However, these methods are typically designed for fixed scale factors, making them unsuitable for arbitrary-scale image SR (ASISR). Existing ASISR techniques often employ implicit neural representations, which come with considerable computational cost and memory demands. To address these limitations, we propose Interpolation Mixing LUT (IM-LUT), a novel framework that performs ASISR by learning to blend multiple interpolation functions to maximize their representational capacity. Specifically, we introduce IM-Net, a network trained to predict mixing weights for interpolation functions based on local image patterns and the target scale factor. To enhance the efficiency of interpolation-based methods, IM-Net is transformed into IM-LUT, where LUTs replace computationally expensive operations, enabling lightweight and fast inference on CPUs while preserving reconstruction quality. Experimental results on several benchmark datasets demonstrate that IM-LUT consistently achieves a superior balance between image quality and efficiency compared to existing methods, highlighting its potential as a promising solution for resource-constrained applications.
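The core idea of interpolation mixing can be illustrated with a minimal NumPy sketch: several fixed interpolation kernels upsample the same image, and a per-pixel weight map blends their outputs. The kernel choice (nearest and bilinear here) and the `mix_interpolations` / `weights` names are illustrative assumptions; in the paper the weights would come from IM-Net (and, after conversion, from LUT lookups), not be supplied by hand.

```python
import numpy as np

def upsample_nearest(img, scale):
    """Nearest-neighbor upsampling of a 2-D grayscale image."""
    h, w = img.shape
    H, W = int(h * scale), int(w * scale)
    ys = np.clip((np.arange(H) / scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(W) / scale).astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]

def upsample_bilinear(img, scale):
    """Bilinear upsampling of a 2-D grayscale image."""
    h, w = img.shape
    H, W = int(h * scale), int(w * scale)
    ys = np.arange(H) / scale
    xs = np.arange(W) / scale
    y0 = np.clip(ys.astype(int), 0, h - 2)
    x0 = np.clip(xs.astype(int), 0, w - 2)
    dy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    dx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    tl = img[np.ix_(y0, x0)]
    tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]
    br = img[np.ix_(y0 + 1, x0 + 1)]
    return (1 - dy) * ((1 - dx) * tl + dx * tr) + dy * ((1 - dx) * bl + dx * br)

def mix_interpolations(img, scale, weights):
    """Blend K interpolation results with per-pixel weights of shape (H, W, K).

    In IM-LUT the weights would be predicted from local patterns and the
    target scale; here they are simply passed in as an argument.
    """
    candidates = np.stack([upsample_nearest(img, scale),
                           upsample_bilinear(img, scale)], axis=-1)
    return (candidates * weights).sum(axis=-1)

# Toy usage: blend the two kernels equally at scale 2.
img = np.arange(16, dtype=float).reshape(4, 4)
w = np.full((8, 8, 2), 0.5)          # uniform mixing weights
sr = mix_interpolations(img, 2, w)    # (8, 8) upsampled result
```

Because the blend is a convex combination, putting all weight on one kernel exactly reproduces that kernel's output, which is what lets a learned weight map interpolate smoothly between the behaviors of the fixed kernels.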