A High-Throughput GPU Framework for Adaptive Lossless Compression of Floating-Point Data

📅 2025-11-06
🤖 AI Summary
To address the high storage costs and stringent lossless fidelity requirements posed by the surge of floating-point data in IoT and high-performance computing, this paper proposes the first GPU-accelerated adaptive lossless floating-point compression framework. The method introduces three key innovations: (1) a lightweight asynchronous pipelined architecture that overlaps CPU–GPU data transfers with computation to hide communication overhead; (2) a theoretically guaranteed error-free floating-point-to-integer transformation algorithm; and (3) adaptive sparse bit-plane encoding, which robustly handles data sparsity induced by outliers. Evaluated on 12 real-world datasets, the framework achieves an average compression ratio of 0.299 (9.1% better than the best prior baseline). Compression and decompression throughput reach 10.82 GB/s and 12.32 GB/s, respectively, both 2.4× higher than state-of-the-art methods.
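The asynchronous pipeline in point (1) works by splitting the input into chunks so that the transfer of chunk i+1 overlaps the compression of chunk i. The paper's CUDA implementation is not reproduced here; a minimal host-side sketch of the same double-buffered overlap, using Python threads and stand-in `transfer`/`compress` stages (all names are illustrative), might look like:

```python
import queue
import threading

def pipeline(chunks, transfer, compress):
    """Overlap 'transfer' of chunk i+1 with 'compress' of chunk i."""
    q = queue.Queue(maxsize=2)  # double buffering: at most 2 chunks in flight

    def transfer_stage():
        for chunk in chunks:
            q.put(transfer(chunk))  # stands in for an async host-to-device copy
        q.put(None)                 # sentinel: no more chunks

    t = threading.Thread(target=transfer_stage)
    t.start()
    results = []
    while (buf := q.get()) is not None:
        results.append(compress(buf))  # runs while the next transfer proceeds
    t.join()
    return results

# Toy stages: "transfer" is the identity, "compress" just measures the chunk.
out = pipeline([b"aaaa", b"bb"], transfer=lambda c: c, compress=len)
```

The bounded queue is what makes the overlap implicit: the transfer thread stays at most one chunk ahead of the compression loop, so neither stage ever waits longer than one chunk's worth of work.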

📝 Abstract
The torrential influx of floating-point data from domains like IoT and HPC necessitates high-performance lossless compression to mitigate storage costs while preserving absolute data fidelity. Leveraging GPU parallelism for this task presents significant challenges, including bottlenecks in heterogeneous data movement, complexities in executing precision-preserving conversions, and performance degradation due to anomaly-induced sparsity. To address these challenges, this paper introduces a novel GPU-based framework for floating-point adaptive lossless compression. The proposed solution employs three key innovations: a lightweight asynchronous pipeline that effectively hides I/O latency during CPU–GPU data transfer; a fast and theoretically guaranteed float-to-integer conversion method that eliminates errors inherent in floating-point arithmetic; and an adaptive sparse bit-plane encoding strategy that mitigates the sparsity caused by outliers. Extensive experiments on 12 diverse datasets demonstrate that the proposed framework significantly outperforms state-of-the-art competitors, achieving an average compression ratio of 0.299 (a 9.1% relative improvement over the best competitor), an average compression throughput of 10.82 GB/s (2.4× higher), and an average decompression throughput of 12.32 GB/s (2.4× higher).
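The abstract stresses that the float-to-integer conversion must avoid the errors inherent in floating-point arithmetic. The paper's exact algorithm is not given here, but one standard error-free transform reinterprets the IEEE-754 bit pattern of each value and remaps the sign bit so that the resulting unsigned integers sort in the same order as the original floats, a sketch of which (under that assumption) is:

```python
import struct

MASK64 = 0xFFFFFFFFFFFFFFFF

def float_to_ordered_uint(x: float) -> int:
    """Reinterpret the IEEE-754 bits of a float64 as an unsigned 64-bit
    integer whose ordering matches the ordering of the floats (lossless)."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    # Negative floats: flip all bits; non-negative floats: set the sign bit.
    return bits ^ MASK64 if bits >> 63 else bits | (1 << 63)

def ordered_uint_to_float(u: int) -> float:
    """Exact inverse of float_to_ordered_uint; the round trip is bit-exact."""
    bits = u ^ (1 << 63) if u >> 63 else u ^ MASK64
    return struct.unpack("<d", struct.pack("<Q", bits))[0]
```

Because the transform only permutes bit patterns, no rounding can occur, which is the property an integer-domain compressor downstream relies on.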
Problem

Research questions and friction points this paper is trying to address.

Developing GPU-accelerated lossless compression for floating-point data
Addressing bottlenecks in CPU-GPU data transfer and conversion
Overcoming performance degradation from sparse anomalous data patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight asynchronous pipeline hides I/O latency
Fast error-free float-to-integer conversion method
Adaptive sparse bit-plane encoding handles outliers
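The third innovation targets the sparsity outliers create: one large value inflates the bit width of a whole block, leaving the high bit planes almost entirely zero. The paper's encoder is not reproduced here; a minimal sketch of sparse bit-plane coding, in which all-zero planes are skipped and recorded in a presence mask (a simplification of the adaptive scheme), might look like:

```python
def bitplane_encode(values, width=16):
    """Split non-negative integers into bit planes; emit only non-zero planes.
    An all-zero plane costs a single mask bit instead of len(values) bits."""
    planes = []
    mask = 0  # bit b of the mask marks whether plane b is present
    for b in range(width):
        plane = [(v >> b) & 1 for v in values]
        if any(plane):
            mask |= 1 << b
            planes.append(plane)
    return mask, planes

def bitplane_decode(mask, planes, n, width=16):
    """Rebuild the original integers from the mask and the stored planes."""
    values = [0] * n
    it = iter(planes)
    for b in range(width):
        if mask >> b & 1:
            for i, bit in enumerate(next(it)):
                values[i] |= bit << b
    return values
```

For a block like `[1, 2, 3, 1024]`, the outlier forces an 11-bit width, but only three planes (bits 0, 1, and 10) are non-zero, so the other planes collapse to single mask bits.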
Authors
Zheng Li (Chongqing University, China)
Weiyan Wang (Tencent; Machine Learning System, High Performance Computing)
Ruiyuan Li (Chongqing University, China)
Chao Chen (Chongqing University, China)
Xianlei Long (Chongqing University, China)
Linjiang Zheng (Chongqing University, China)
Quanqing Xu (Ant Group; Cloud Computing, Cloud Storage, Large-scale Hybrid Storage Systems)
Chuanhui Yang (OceanBase, Ant Group, China)