🤖 AI Summary
To address performance bottlenecks in large-scale MPI collective communication caused by large-message transfers, this paper introduces ZCCL—the first error-bounded lossy compression framework specifically designed for collective operations. Its core contributions are: (1) the fZ-light compressor—an ultra-lightweight, high-throughput design ensuring strict, user-controllable error bounds; (2) the first systematic integration of error-bounded lossy compression into mainstream collectives (e.g., Allgather, Allreduce), enabling end-to-end provable and tunable error guarantees; and (3) a compression-communication co-optimization mechanism tailored to both data-movement and computation-intensive collective primitives. Evaluated on real scientific datasets, ZCCL achieves 1.9×–8.9× speedup over native MPI, significantly reduces communication volume, and rigorously satisfies user-specified absolute or relative error thresholds.
📝 Abstract
With the ever-increasing computing power of supercomputers and the growing scale of scientific applications, the efficiency of MPI collective communication has become a critical bottleneck in large-scale distributed and parallel processing. Large message sizes in MPI collectives are particularly concerning because they can significantly degrade overall parallel performance. To address this issue, prior research simply applies off-the-shelf fixed-rate lossy compressors to MPI collectives, leading to suboptimal performance, limited generalizability, and unbounded errors. In this paper, we propose a novel solution, called ZCCL, which leverages error-bounded lossy compression to significantly reduce message sizes, resulting in a substantial reduction in communication costs. The key contributions are three-fold. (1) We develop two general, optimized lossy-compression-based frameworks covering both types of MPI collectives (collective data movement as well as collective computation), based on their particular characteristics. Our frameworks not only reduce communication costs but also preserve data accuracy. (2) We customize fZ-light, an ultra-fast error-bounded lossy compressor, to meet the specific needs of collective communication. (3) We integrate ZCCL into multiple collectives, such as Allgather, Allreduce, Scatter, and Broadcast, and perform a comprehensive evaluation based on real-world scientific application datasets. Experiments show that our solution outperforms the original MPI collectives as well as multiple baselines by 1.9–8.9×.
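The core idea above—compress each rank's payload under a strict absolute error bound before the collective exchange, then decompress on receipt—can be sketched as follows. This is a minimal illustration, not the paper's fZ-light algorithm: it uses plain linear-scaling quantization (the standard building block of error-bounded compressors) and simulates a 4-rank Allgather in a single process; the function names and the simulated exchange are my own assumptions.

```python
import numpy as np

def compress(data, abs_err):
    # Linear-scaling quantization: map each value to an integer bin of
    # width 2*abs_err, so the reconstruction error never exceeds abs_err.
    # (fZ-light additionally applies prediction and fast encoding; this
    # sketch keeps only the error-bound core.)
    return np.round(data / (2.0 * abs_err)).astype(np.int32)

def decompress(codes, abs_err):
    # Reconstruct each value at the center of its quantization bin.
    return codes.astype(np.float64) * (2.0 * abs_err)

# Simulated compression-enabled Allgather with 4 "ranks": each rank
# compresses its chunk, the (smaller) integer codes travel over the wire,
# and every rank decompresses everything it receives.
rng = np.random.default_rng(0)
abs_err = 1e-3
chunks = [rng.standard_normal(1000) for _ in range(4)]
wire = [compress(c, abs_err) for c in chunks]
gathered = [decompress(w, abs_err) for w in wire]

# The user-specified absolute error bound holds pointwise on every chunk.
for orig, recon in zip(chunks, gathered):
    assert np.max(np.abs(orig - recon)) <= abs_err
```

In a real deployment the integer codes would additionally be entropy-coded to shrink the message, and the compress/exchange/decompress steps would wrap `MPI_Allgather` (or be fused into the collective's internal ring/recursive-doubling steps, as ZCCL does for computation-type collectives like Allreduce).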