🤖 AI Summary
This work proposes a hardware-friendly grayscale image compression method that addresses the high computational cost of existing neural image codecs, which hinders their deployment on low-power edge devices. By introducing differentiable logic circuits into image compression for the first time, the approach enables end-to-end training of lookup tables, effectively combining the representational power of neural networks with the energy efficiency of Boolean operations. Evaluated on standard grayscale image datasets, the method outperforms conventional codecs in both reconstruction fidelity and computational efficiency, achieving significantly lower energy consumption and latency. This study thus opens a new pathway toward practical deployment of learned image compression algorithms on resource-constrained edge hardware.
📝 Abstract
Neural image codecs achieve higher compression ratios than traditional hand-crafted methods such as PNG or JPEG-XL, but often incur substantial computational overhead, limiting their deployment on energy-constrained devices such as smartphones, cameras, and drones. We propose Grayscale Image Compression with Differentiable Logic Circuits (GIC-DLC), a hardware-aware codec in which lookup tables are trained end-to-end, combining the flexibility of neural networks with the efficiency of Boolean operations. Experiments on grayscale benchmark datasets show that GIC-DLC outperforms traditional codecs in compression efficiency while substantially reducing energy consumption and latency. These results demonstrate that learned compression can be hardware-friendly, offering a promising direction for low-power image compression on edge devices.
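The abstract does not show GIC-DLC's concrete architecture. As background, differentiable logic circuits in prior work typically relax each two-input gate to a softmax mixture over the 16 Boolean functions (using probabilistic relaxations such as AND → a·b), train the mixture weights by gradient descent, and then collapse each gate to its most probable function, yielding a plain lookup table for hardware deployment. The sketch below illustrates only that general idea; all names are hypothetical and it is not the paper's implementation.

```python
import numpy as np

# The 16 two-input Boolean functions, relaxed to real-valued inputs in [0, 1]
# via probabilistic logic (e.g. AND -> a*b, OR -> a + b - a*b).
GATES = [
    lambda a, b: np.zeros_like(a),         # FALSE
    lambda a, b: a * b,                    # AND
    lambda a, b: a * (1 - b),              # A AND NOT B
    lambda a, b: a,                        # A
    lambda a, b: (1 - a) * b,              # NOT A AND B
    lambda a, b: b,                        # B
    lambda a, b: a + b - 2 * a * b,        # XOR
    lambda a, b: a + b - a * b,            # OR
    lambda a, b: 1 - (a + b - a * b),      # NOR
    lambda a, b: 1 - (a + b - 2 * a * b),  # XNOR
    lambda a, b: 1 - b,                    # NOT B
    lambda a, b: 1 - b + a * b,            # A OR NOT B
    lambda a, b: 1 - a,                    # NOT A
    lambda a, b: 1 - a + a * b,            # NOT A OR B
    lambda a, b: 1 - a * b,                # NAND
    lambda a, b: np.ones_like(a),          # TRUE
]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class DifferentiableGate:
    """One trainable logic gate: a softmax mixture over the 16 functions."""

    def __init__(self, rng):
        # In training, these logits would be updated by backpropagation.
        self.logits = rng.normal(size=16)

    def forward(self, a, b):
        # Soft (differentiable) output: weighted sum of all 16 gates.
        w = softmax(self.logits)
        return sum(wi * g(a, b) for wi, g in zip(w, GATES))

    def hard(self, a, b):
        # After training, keep only the most probable gate: this is the
        # Boolean lookup-table entry that runs on low-power hardware.
        return GATES[int(np.argmax(self.logits))](a, b)
```

Because `forward` is smooth in both the inputs and the logits, gradients flow through an entire circuit of such gates; at inference time `hard` discards the mixture and executes a single Boolean operation per gate, which is where the energy and latency savings come from.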