NeurLZ: An Online Neural Learning-Based Method to Enhance Scientific Lossy Compression

📅 2024-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large-scale scientific simulation data face severe storage and I/O bottlenecks. Conventional lossy compression struggles to balance compression ratio, fidelity, and heterogeneous data characteristics, while existing deep learning-based compression methods rely on large models and offline training, lacking online adaptability and efficiency. This paper proposes NeurLZ, an online neural-augmented lossy compression framework: (1) a lightweight skip-connected DNN performs real-time, in-situ learning during compression; (2) an error modeling and compensation mechanism recovers fine-grained details lost by the underlying compressor; (3) adaptive error control offers a strict 1× mode or a relaxed 2× mode, and cross-field learning exploits correlations between data fields. Experiments demonstrate that only five epochs of online learning achieve an 89% bit-rate reduction, with further optimization reaching roughly a 94% reduction at equivalent distortion, substantially outperforming state-of-the-art scientific data compressors.
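The "lightweight skip-connected DNN" idea in the summary above can be illustrated with a minimal, dependency-free sketch. This is an assumption-laden toy, not the paper's actual architecture: the class name, layer sizes, and plain-SGD update rule are all hypothetical. The point it shows is the role of the skip connection: the network output is input + learned correction, so an untrained model already reproduces the decompressed data and online learning only has to capture the residual error.

```python
import math
import random

class SkipResidualNet:
    """Toy sketch (hypothetical, not NeurLZ's model): a one-hidden-layer
    network whose output is x + correction(x). The skip connection means
    a freshly initialized model is an identity map on the decoded data,
    so online training only needs to learn the small residual error."""

    def __init__(self, hidden=8, lr=0.05, seed=0):
        rng = random.Random(seed)
        self.w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        # hidden activations, then a scalar correction added to the input
        h = [math.tanh(w * x + b) for w, b in zip(self.w1, self.b1)]
        correction = sum(w * hi for w, hi in zip(self.w2, h)) + self.b2
        return x + correction, h  # skip connection: output = input + correction

    def train_step(self, x, target):
        """One SGD step on squared error between the enhanced value and
        the original (ground-truth) value, as in compression-time learning."""
        y, h = self.forward(x)
        err = y - target
        for i in range(len(h)):
            grad_pre = err * self.w2[i] * (1.0 - h[i] ** 2)
            self.w2[i] -= self.lr * err * h[i]
            self.w1[i] -= self.lr * grad_pre * x
            self.b1[i] -= self.lr * grad_pre
        self.b2 -= self.lr * err
        return err * err
```

In use, the encoder would stream (decoded, original) pairs through `train_step` for a few epochs; because only the residual is learned, even a handful of epochs can improve reconstruction quality, which matches the few-epoch behavior the summary reports.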

📝 Abstract
Large-scale scientific simulations generate massive datasets, posing challenges for storage and I/O. Traditional lossy compression struggles to balance compression ratio, data quality, and adaptability to diverse scientific data features. While deep learning-based solutions have been explored, their common reliance on large models and offline training limits both adaptability to dynamic data characteristics and computational efficiency. To address these challenges, we propose NeurLZ, a neural method designed to enhance lossy compression by integrating online learning, cross-field learning, and robust error regulation. Key innovations of NeurLZ include: (1) compression-time online neural learning with lightweight skipping DNN models, adapting to residual errors without costly offline pretraining; (2) an error-mitigating capability that recovers fine details from compression errors overlooked by conventional compressors; (3) 1× and 2× error-regulation modes, either strictly enforcing the user-specified error bound or relaxing it to 2× for better overall quality; and (4) cross-field learning that leverages inter-field correlations in scientific data to improve conventional methods. Comprehensive evaluations on representative HPC datasets (e.g., Nyx, Miranda, Hurricane) against state-of-the-art compressors show NeurLZ's effectiveness. During the first five learning epochs, NeurLZ achieves an 89% bit-rate reduction, and further optimization yields up to around a 94% reduction at equivalent distortion, significantly outperforming existing methods and demonstrating that NeurLZ is a scalable and efficient solution for enhancing scientific lossy compression.
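The 1× and 2× error-regulation modes from the abstract can be sketched as follows. This is one plausible reading, not the paper's actual algorithm, and the function name and signature are hypothetical. The key observation: since the decoded value is already within the error bound of the original, clipping the enhanced value to decoded ± bound guarantees at most a 2× deviation from the original (relaxed mode); a strict 1× guarantee additionally requires an encoder-side check against the original data, falling back to the decoded value where the bound would be violated.

```python
def enhance_with_error_regulation(original, decoded, predicted_residual, eb, mode=2):
    """Hypothetical sketch of 1x/2x error regulation.
    original  -- ground-truth values (available at compression time only)
    decoded   -- compressor output, guaranteed |decoded - original| <= eb
    predicted_residual -- DNN-predicted corrections to add to decoded
    mode=2    -- relaxed: enhanced value within 2*eb of the original
    mode=1    -- strict: enhanced value within eb of the original
    """
    enhanced = []
    for x, d, r in zip(original, decoded, predicted_residual):
        e = d + r
        # Clip to decoded +/- eb. Because |d - x| <= eb already holds,
        # the triangle inequality gives |e - x| <= 2*eb (the 2x mode).
        e = min(max(e, d - eb), d + eb)
        if mode == 1:
            # Strict 1x mode: the encoder still holds the original value,
            # so it can detect violations and fall back to the decoded
            # value (which is within eb by construction).
            if abs(e - x) > eb:
                e = d
        enhanced.append(e)
    return enhanced
```

In this reading, the 2× mode needs no extra metadata, while the strict 1× mode would require the encoder to record where the fallback was taken so the decoder can reproduce it; how NeurLZ actually encodes that is not specified in the abstract.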
Problem

Research questions and friction points this paper is trying to address.

Enhance lossy compression for large-scale scientific data
Balance compression ratio, data quality, and adaptability
Overcome limitations of traditional and deep learning methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online neural learning with lightweight DNN models
Error-mitigating capability for fine detail recovery
Cross-field learning leveraging inter-field correlations