LVPNet: A Latent-variable-based Prediction-driven End-to-end Framework for Lossless Compression of Medical Images

📅 2025-06-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing lossless medical image compression methods based on sub-image autoregression suffer from posterior collapse and inefficient utilization of latent variables, because image segmentation spreads latent information evenly across the sub-images. To address this, the paper proposes LVPNet: a prediction-driven, end-to-end latent-variable compression framework that uses global latent variables to predict pixel values and entropy-codes the predicted probability distributions. Two key components are introduced: (1) a Global Multi-scale Sensing Module (GMSM) that extracts compact latent representations from the entire image and captures spatial dependencies in the latent space, and (2) a Quantization Compensation Module (QCM) that learns the distribution of quantization errors and refines the quantized features. On challenging medical imaging benchmarks, LVPNet achieves superior compression efficiency over state-of-the-art lossless image compression approaches while maintaining competitive inference speed.

📝 Abstract
Autoregressive Initial Bits is a framework that integrates sub-image autoregression and latent variable modeling, and it has demonstrated advantages in lossless medical image compression. However, in existing methods, the image segmentation process distributes latent-variable information evenly across the sub-images, which causes posterior collapse and inefficient utilization of latent variables. To address these issues, we propose LVPNet, a prediction-based, end-to-end lossless medical image compression method that leverages global latent variables to predict pixel values and encodes the predicted probabilities for lossless compression. Specifically, we introduce the Global Multi-scale Sensing Module (GMSM), which extracts compact and informative latent representations from the entire image, effectively capturing spatial dependencies within the latent space. Furthermore, to mitigate the information loss introduced during quantization, we propose the Quantization Compensation Module (QCM), which learns the distribution of quantization errors and refines the quantized features to compensate for quantization loss. Extensive experiments on challenging benchmarks demonstrate that our method achieves superior compression efficiency compared to state-of-the-art lossless image compression approaches while maintaining competitive inference speed. The code is available at https://github.com/Anonymity00000/Anonymity-repository/.
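The abstract's core principle — predict a probability distribution for each pixel, then entropy-code the actual value under that distribution — can be illustrated numerically. The sketch below is not the paper's architecture; it only computes the ideal entropy-coded length in bits for predicted distributions (all arrays and values are illustrative), showing that a sharper, more accurate predictor yields a shorter code:

```python
import numpy as np

def ideal_code_length_bits(pixels, probs):
    """Ideal entropy-coded length in bits when each pixel value is coded
    under a predicted per-pixel probability distribution.

    pixels: (N,) int array of symbols in [0, K)
    probs:  (N, K) array; probs[i] is the predicted distribution for pixel i
    """
    p = probs[np.arange(len(pixels)), pixels]  # probability of the true value
    return float(-np.log2(p).sum())

# Toy example: 4 "pixels", each taking one of 4 values.
pixels = np.array([0, 1, 1, 3])

uniform = np.full((4, 4), 0.25)            # uninformative predictor: 2 bits/pixel
uniform_bits = ideal_code_length_bits(pixels, uniform)

sharp = np.full((4, 4), 0.1 / 3)           # sharper predictor that assigns 0.9
sharp[np.arange(4), pixels] = 0.9          # to each true value
sharp_bits = ideal_code_length_bits(pixels, sharp)
```

Here `uniform_bits` is exactly 8.0 (2 bits per pixel), while `sharp_bits` is about 0.61, which is why better pixel prediction translates directly into better lossless compression ratios.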
Problem

Research questions and friction points this paper is trying to address.

Improves lossless medical image compression efficiency
Addresses posterior collapse in latent variable modeling
Mitigates quantization loss in feature representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses global latent variables for pixel prediction
Introduces Global Multi-scale Sensing Module
Proposes Quantization Compensation Module
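The QCM's exact design is not described on this page; it learns the distribution of quantization errors with a network and refines the quantized features. As a minimal numeric stand-in for that idea, the sketch below (purely illustrative: floor quantization and a constant mean-error offset in place of the learned refinement) shows that compensating for quantization error reduces reconstruction loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(z):
    return np.floor(z)                       # hard quantization drops the fraction

z = rng.uniform(0.0, 1.0, 100_000)           # synthetic "latent features"
zq = quantize(z)

mean_err = (z - zq).mean()                   # ~0.5 for uniform inputs
z_refined = zq + mean_err                    # compensated reconstruction

mse_raw = np.mean((z - zq) ** 2)             # ~1/3 without compensation
mse_refined = np.mean((z - z_refined) ** 2)  # ~1/12 with the constant offset
```

A learned module can do much better than a constant offset by conditioning the correction on the features themselves, but even this crude compensation cuts the mean squared quantization error substantially.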
Chenyue Song
Harbin Institute of Technology
Chen Hui
Harbin Institute of Technology & Nanyang Technological University
image compression · quality assessment · multimedia security · image and video processing
Qing Lin
Dalian University of Technology, China
Wei Zhang
Harbin Institute of Technology, China
Siqiao Li
Harbin Institute of Technology, China
Haiqi Zhu
Harbin Institute of Technology, China
Shengping Zhang
Professor, Harbin Institute of Technology, China
Computer Vision · Pattern Recognition · Machine Learning
Zhixuan Li
Research Fellow, CCDS, Nanyang Technological University (Singapore)
Computer Vision · Scene Understanding · Occlusion Handling
Shaohui Liu
Harbin Institute of Technology, China
Feng Jiang
Nanjing University of Information Science and Technology, China
Xiang Li
Harbin Institute of Technology, China