EDGC: Entropy-driven Dynamic Gradient Compression for Efficient LLM Training

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In distributed large language model (LLM) training, static gradient compression overlooks the dynamic nature of gradients, compromising the trade-off between communication efficiency and model accuracy. To address this, we propose a dynamic compression framework grounded in gradient entropy. We innovatively adopt gradient entropy as a proxy metric to quantify information uncertainty in gradients, theoretically establish an entropy–compression-rate relationship, and design a windowed adaptive mechanism for real-time, cross-pipeline-stage compression-rate optimization. Gradient entropy is efficiently estimated via downsampling and seamlessly integrated into mainstream distributed training architectures. Experiments on multi-GPU clusters demonstrate that our method reduces communication latency by up to 46.45% and end-to-end training time by up to 16.13%, while strictly preserving model accuracy.

📝 Abstract
Training large language models (LLMs) poses significant challenges regarding computational resources and memory capacity. Although distributed training techniques help mitigate these issues, they still suffer from considerable communication overhead. Existing approaches primarily rely on static gradient compression to enhance communication efficiency; however, these methods neglect the dynamic nature of evolving gradients during training, leading to performance degradation. Accelerating LLM training via compression without sacrificing performance remains a challenge. In this paper, we propose an entropy-driven dynamic gradient compression framework called EDGC. The core concept is to adjust the compression rate during LLM training based on the evolving trends of gradient entropy, taking into account both compression efficiency and error. EDGC consists of three key components. First, it employs a down-sampling method to efficiently estimate gradient entropy, reducing computation overhead. Second, it establishes a theoretical model linking compression rate with gradient entropy, enabling more informed compression decisions. Lastly, a window-based adjustment mechanism dynamically adapts the compression rate across pipeline stages, improving communication efficiency and maintaining model performance. We implemented EDGC on a 32-NVIDIA-V100 cluster and a 64-NVIDIA-H100 cluster to train GPT2-2.5B and GPT2-12.1B, respectively. The results show that EDGC significantly reduces communication latency and training time by up to 46.45% and 16.13%, respectively, while preserving LLM accuracy.
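The abstract's first component, estimating gradient entropy from a down-sampled subset of gradient values, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the sampling fraction, bin count, and histogram-based Shannon estimator are assumptions chosen for clarity.

```python
import numpy as np

def gradient_entropy(grad, sample_frac=0.01, n_bins=256, rng=None):
    """Estimate the Shannon entropy (in bits) of a gradient tensor
    from a small random subsample of its elements.

    Hypothetical sketch of the down-sampling idea: instead of
    histogramming the full gradient, sample `sample_frac` of its
    values and compute entropy over the empirical bin distribution.
    """
    rng = rng or np.random.default_rng(0)
    flat = np.asarray(grad).ravel()
    k = max(1, int(len(flat) * sample_frac))
    sample = rng.choice(flat, size=k, replace=False)
    # Empirical distribution over fixed-width bins of the sample.
    hist, _ = np.histogram(sample, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A near-constant gradient yields entropy close to zero (highly compressible), while a widely spread gradient yields higher entropy, which is the signal EDGC uses to back off compression.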
Problem

Research questions and friction points this paper is trying to address.

Reducing communication overhead in distributed LLM training
Addressing performance degradation from static gradient compression
Dynamically adjusting compression rates based on gradient entropy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic gradient compression adjusts rate by entropy
Down-sampling estimates gradient entropy with low overhead
Window-based mechanism adapts compression across pipeline stages
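The window-based mechanism in the last bullet can be illustrated with a minimal controller. This is a sketch under stated assumptions, not the paper's algorithm: the window size, the linear entropy-to-rate mapping, and the entropy ceiling are all hypothetical parameters standing in for the paper's theoretical entropy–compression-rate model.

```python
from collections import deque

class WindowedCompressionController:
    """Hypothetical window-based compression-rate adjuster: keep a
    sliding window of recent entropy estimates for one pipeline stage
    and map the windowed average to a keep-rate. Low entropy means the
    gradient carries little information, so compress aggressively;
    high entropy means keep more values."""

    def __init__(self, window=10, min_rate=0.01, max_rate=0.5,
                 max_entropy=8.0):
        self.history = deque(maxlen=window)
        self.min_rate = min_rate      # densest compression (keep 1%)
        self.max_rate = max_rate      # lightest compression (keep 50%)
        self.max_entropy = max_entropy  # assumed entropy ceiling (bits)

    def update(self, entropy):
        """Record a new entropy estimate and return the keep-rate."""
        self.history.append(entropy)
        avg = sum(self.history) / len(self.history)
        frac = min(max(avg / self.max_entropy, 0.0), 1.0)
        # Linear interpolation: low entropy -> aggressive compression.
        return self.min_rate + frac * (self.max_rate - self.min_rate)
```

In a pipeline-parallel setup, one such controller would run per stage, so stages whose gradients carry more information are compressed less than stages whose gradients have collapsed to a narrow distribution.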
Authors

Qingao Yi — University of Shanghai for Science and Technology, Shanghai, China
Jiaang Duan — Shanghai Jiao Tong University, Shanghai, China
Hanwen Hu — Shanghai Jiao Tong University, Shanghai, China
Qin Hua — Shanghai Jiao Tong University, Shanghai, China
Haiyan Zhao — Peking University
Shiyou Qian — Shanghai Jiao Tong University (Computer Science)
Dingyu Yang — Zhejiang University (Database, Performance Evaluation, Distributed Processing)
Jian Cao — Shanghai Jiao Tong University, Shanghai, China
Jinghua Tang — Shanghai Jiao Tong University, Shanghai, China
Yinghao Yu — Engineer, Alibaba (Resource management in containerized clusters; Generation optimizations for distributed systems)
Chenzhi Liao — Alibaba Group, Hangzhou, China
Kangjin Wang — Alibaba Group, Hangzhou, China
Liping Zhang — Alibaba Group, Hangzhou, China