🤖 AI Summary
Existing lossless medical image compression methods built on the Autoregressive Initial Bits framework suffer from posterior collapse and inefficient utilization of latent variables: sub-image segmentation forces latent information to be distributed evenly across sub-images. To address this, the authors propose LVPNet, a prediction-based, end-to-end lossless compression framework that uses global latent variables to predict pixel values and encodes the predicted probability distributions. Two key components are introduced: (1) a Global Multi-scale Sensing Module (GMSM) that extracts compact, informative latent representations from the entire image, capturing spatial dependencies in the latent space, and (2) a Quantization Compensation Module (QCM) that learns the distribution of quantization errors and refines quantized features to offset quantization loss. On challenging medical imaging benchmarks, LVPNet achieves superior compression efficiency over state-of-the-art lossless image compression approaches while maintaining competitive inference speed.
📝 Abstract
Autoregressive Initial Bits is a framework that integrates sub-image autoregression and latent variable modeling, and it has demonstrated clear advantages in lossless medical image compression. However, in existing methods the image segmentation process distributes latent-variable information evenly across the sub-images, which in turn causes posterior collapse and inefficient utilization of latent variables. To address these issues, we propose LVPNet, a prediction-based, end-to-end lossless medical image compression method that leverages global latent variables to predict pixel values and encodes the predicted probabilities for lossless compression. Specifically, we introduce the Global Multi-scale Sensing Module (GMSM), which extracts compact and informative latent representations from the entire image, effectively capturing spatial dependencies within the latent space. Furthermore, to mitigate the information loss introduced during quantization, we propose the Quantization Compensation Module (QCM), which learns the distribution of quantization errors and refines the quantized features to compensate for quantization loss. Extensive experiments on challenging benchmarks demonstrate that our method achieves superior compression efficiency compared to state-of-the-art lossless image compression approaches, while maintaining competitive inference speed. The code is available at https://github.com/Anonymity00000/Anonymity-repository/.
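To make the quantization-compensation idea concrete, here is a minimal plain-Python sketch, not the paper's implementation: the latent is uniformly quantized (losing information), and a compensation term, which in LVPNet would be predicted by the learned QCM but is stubbed here with the true error for illustration, refines the quantized feature. The function names and the uniform quantizer are assumptions made for this example.

```python
def quantize(z, step=0.25):
    """Uniform scalar quantization of a latent value; information is lost here."""
    return round(z / step) * step

def compensate(z_q, predicted_error):
    """QCM-style refinement: add an estimate of the quantization error.

    In LVPNet this estimate would come from a trained module that models the
    error distribution; here the caller supplies it directly for illustration.
    """
    return z_q + predicted_error

z = 0.63                       # original latent value
z_q = quantize(z)              # quantized to the nearest 0.25 step -> 0.75
err = z - z_q                  # true quantization error the QCM learns to model
z_hat = compensate(z_q, err)   # compensation moves the feature back toward z
```

With a perfect error estimate the refinement recovers the original latent exactly; in practice the learned estimate only reduces, rather than eliminates, the quantization loss.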