PacTrain: Pruning and Adaptive Sparse Gradient Compression for Efficient Collective Communication in Distributed Deep Learning

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In distributed deep learning, high communication overhead during gradient aggregation, together with the difficulty existing compression methods have in achieving both acceleration and model accuracy, poses a significant challenge. To address these issues, this paper proposes a pruning-driven collaborative sparsity optimization framework that tightly integrates structured model pruning with adaptive sparse gradient compression, enabling globally consistent sparsity modeling and an all-reduce-compatible, lossless compression protocol. It further incorporates dynamic sparsity selection, sparsity-aware collective communication, and efficient encoding to substantially reduce bandwidth requirements. Evaluated on vision and language model training tasks under bandwidth-constrained conditions, the framework improves training throughput by 1.25×–8.72× over state-of-the-art compression-enabled systems with zero accuracy degradation.

📝 Abstract
Large-scale deep neural networks (DNNs) exhibit excellent performance on a variety of tasks. As DNNs and datasets grow, distributed training becomes extremely time-consuming and demands larger clusters, and the resulting gradient aggregation overhead emerges as a main bottleneck. While gradient compression and sparse collective communication techniques are commonly employed to alleviate network load, many gradient compression schemes fail to accelerate training while also preserving accuracy. This paper introduces PacTrain, a novel framework that accelerates distributed training by combining pruning with sparse gradient compression. Actively pruning the neural network makes the model weights and gradients sparse, and by ensuring global knowledge of the gradient sparsity among all distributed training workers, PacTrain can perform lightweight compressed communication without harming accuracy. We show that the PacTrain compression scheme achieves a near-optimal compression strategy while remaining compatible with the all-reduce primitive. Experimental evaluations show that PacTrain improves training throughput by 1.25 to 8.72 times compared to state-of-the-art compression-enabled systems on representative vision and language model training tasks under bandwidth-constrained conditions.
Problem

Research questions and friction points this paper is trying to address.

Reduces gradient aggregation overhead in distributed DNN training
Combines pruning and sparse compression to maintain accuracy
Improves training throughput under bandwidth-constrained conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines pruning with sparse gradient compression
Ensures global knowledge of gradient sparsity
Achieves near-optimal compression with all-reduce
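The "global knowledge of gradient sparsity" idea above can be sketched as follows. This is an illustrative NumPy simulation, not the paper's actual protocol; the function names and the magnitude-based pruning rule are assumptions. The key point it demonstrates: when every worker shares the same pruning mask, gradient nonzeros align across workers, so workers can exchange only the compacted value vectors, sum them with an ordinary (all-reduce-style) reduction, and reconstruct the dense aggregate losslessly.

```python
import numpy as np

def make_global_mask(weights, sparsity=0.75):
    """Prune the smallest-magnitude weights; the resulting boolean mask is
    shared by all workers, so gradient nonzeros align across workers."""
    k = int(weights.size * (1 - sparsity))            # number of surviving weights
    threshold = np.sort(np.abs(weights).ravel())[-k]  # k-th largest magnitude
    return np.abs(weights) >= threshold

def compress(grad, mask):
    # Only values are transmitted; indices are implied by the shared mask.
    return grad[mask]

def sparse_allreduce(compressed_grads):
    # A dense sum over the compacted vectors stands in for ring all-reduce:
    # because all workers use the same mask, entries line up position-wise.
    return np.sum(compressed_grads, axis=0)

def decompress(values, mask):
    out = np.zeros(mask.shape, dtype=values.dtype)
    out[mask] = values
    return out

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))
mask = make_global_mask(weights, sparsity=0.75)

# Each worker's gradient is zero outside the pruned support.
worker_grads = [rng.normal(size=weights.shape) * mask for _ in range(4)]
agg = decompress(sparse_allreduce([compress(g, mask) for g in worker_grads]), mask)

# Lossless: identical to summing the full dense gradients.
assert np.allclose(agg, np.sum(worker_grads, axis=0))
```

Because the mask is globally consistent, no per-worker index lists need to be exchanged, which is what keeps the scheme compatible with the dense all-reduce primitive rather than requiring all-gather of sparse (index, value) pairs.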
Yisu Wang
Hong Kong University of Science and Technology (Guangzhou)
Ruilong Wu
Hong Kong University of Science and Technology (Guangzhou)
Xinjiao Li
Hong Kong University of Science and Technology (Guangzhou)
Dirk Kutscher
The Hong Kong University of Science and Technology (Guangzhou)
computer networks