🤖 AI Summary
This work addresses the high computational cost and limited parallelism of many existing learning-based image compression methods, despite their improved rate-distortion performance. The authors propose ARCHE, a framework that unifies hyperpriors, spatial autoregression, and channel excitation within an efficient convolutional architecture, modeling both global and local dependencies in the latent variables without recurrent or Transformer components. Through adaptive feature recalibration and residual refinement, ARCHE enhances the quality of the latent representation. The model is end-to-end trainable and achieves BD-Rate reductions of approximately 48%, 30%, and 5% over Ballé et al., Minnen & Singh, and VVC Intra, respectively, on the Kodak dataset. With 95 million parameters, it processes a single image in 222 ms while delivering visibly superior reconstruction quality compared to current state-of-the-art approaches.
📝 Abstract
Recent progress in learning-based image compression has demonstrated that end-to-end optimization can substantially outperform traditional codecs by jointly learning compact latent representations and probabilistic entropy models. However, many existing approaches achieve high rate-distortion efficiency at the expense of increased computational cost and limited parallelism. This paper presents ARCHE (Autoregressive Residual Compression with Hyperprior and Excitation), an end-to-end learned image compression framework that balances modeling accuracy and computational efficiency. The proposed architecture unifies hierarchical, spatial, and channel-based priors within a single probabilistic framework, capturing both global and local dependencies in the latent representation of the image, while employing adaptive feature recalibration and residual refinement to enhance latent representation quality. Without relying on recurrent or transformer-based components, ARCHE attains state-of-the-art rate-distortion efficiency: on the Kodak benchmark it reduces BD-Rate by approximately 48% relative to the widely used baseline of Ballé et al., 30% relative to the channel-wise autoregressive model of Minnen & Singh, and 5% relative to the VVC Intra codec. The framework remains computationally efficient, with 95M parameters and a running time of 222 ms per image. Visual comparisons confirm sharper textures and improved color fidelity, particularly at lower bit rates, demonstrating that accurate entropy modeling can be achieved through efficient convolutional designs suitable for practical deployment.
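The "adaptive feature recalibration" via channel excitation described above is commonly realized with a squeeze-and-excitation style gate: spatial statistics of each channel are pooled, passed through a small bottleneck, and used to rescale the channels. The sketch below is an illustrative NumPy implementation of this general mechanism, not the authors' exact design; the function name, reduction ratio, and weight shapes are assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_excitation(x, w1, w2):
    """Squeeze-and-excitation style channel recalibration (illustrative sketch).

    x  : latent tensor of shape (C, H, W)
    w1 : (C, C//r) reduction weights of the bottleneck
    w2 : (C//r, C) expansion weights back to C channels

    Each channel is rescaled by a learned gate in (0, 1) computed
    from its global spatial statistics.
    """
    squeeze = x.mean(axis=(1, 2))            # (C,) global average pool
    hidden = np.maximum(squeeze @ w1, 0.0)   # ReLU bottleneck
    gates = sigmoid(hidden @ w2)             # (C,) per-channel gates
    return x * gates[:, None, None]          # broadcast gates over H, W
```

Because the gates lie in (0, 1), the operation can only attenuate channels, letting the network emphasize informative channels of the latent representation relative to less useful ones.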