🤖 AI Summary
In distributed large language model (LLM) training, static gradient compression overlooks the dynamic nature of gradients, compromising the trade-off between communication efficiency and model accuracy. To address this, we propose a dynamic compression framework grounded in gradient entropy. We adopt gradient entropy as a proxy metric for the information uncertainty in gradients, theoretically establish an entropy–compression-rate relationship, and design a windowed adaptive mechanism for real-time, cross-pipeline-stage compression-rate optimization. Gradient entropy is efficiently estimated via down-sampling and integrates seamlessly into mainstream distributed training architectures. Experiments on multi-GPU clusters demonstrate that our method reduces communication latency by up to 46.45% and end-to-end training time by up to 16.13%, while strictly preserving model accuracy.
📝 Abstract
Training large language models (LLMs) poses significant challenges regarding computational resources and memory capacity. Although distributed training techniques help mitigate these issues, they still suffer from considerable communication overhead. Existing approaches primarily rely on static gradient compression to enhance communication efficiency; however, these methods neglect the dynamic nature of evolving gradients during training, leading to performance degradation. Accelerating LLM training via compression without sacrificing performance remains a challenge. In this paper, we propose an entropy-driven dynamic gradient compression framework called EDGC. The core idea is to adjust the compression rate during LLM training based on the evolving trends of gradient entropy, taking into account both compression efficiency and error. EDGC consists of three key components. First, it employs a down-sampling method to efficiently estimate gradient entropy, reducing computation overhead. Second, it establishes a theoretical model linking compression rate with gradient entropy, enabling more informed compression decisions. Third, a window-based adjustment mechanism dynamically adapts the compression rate across pipeline stages, improving communication efficiency while maintaining model performance. We implemented EDGC on a 32-NVIDIA-V100 cluster and a 64-NVIDIA-H100 cluster to train GPT2-2.5B and GPT2-12.1B, respectively. The results show that EDGC reduces communication latency and training time by up to 46.45% and 16.13%, respectively, while preserving LLM accuracy.
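To make the three components concrete, here is a minimal sketch of the pipeline the abstract describes: entropy estimation from a down-sampled gradient, a monotone mapping from entropy to a compression rate, and a window-based controller that smooths the rate over recent steps. All function names, the histogram-based entropy estimator, and the linear entropy-to-rate mapping are illustrative assumptions; the paper's actual estimator and theoretical entropy–rate model are not specified in the abstract.

```python
import numpy as np
from collections import deque


def estimate_gradient_entropy(grad, sample_frac=0.01, bins=64, seed=0):
    """Estimate Shannon entropy (bits) of a gradient tensor from a random
    subsample. Uniform random down-sampling and histogram binning are
    assumptions standing in for EDGC's estimator."""
    flat = np.asarray(grad).ravel()
    n = max(1, int(flat.size * sample_frac))
    rng = np.random.default_rng(seed)
    sample = flat[rng.choice(flat.size, size=n, replace=False)]
    hist, _ = np.histogram(sample, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())


def entropy_to_rate(entropy, bins=64, min_keep=0.01, max_keep=0.5):
    """Map entropy to the fraction of gradient coordinates kept.
    Heuristic placeholder for the paper's theoretical model: higher
    entropy -> more information -> keep more (compress less)."""
    frac = entropy / np.log2(bins)     # normalize to [0, 1]
    frac = min(max(frac, 0.0), 1.0)
    return min_keep + (max_keep - min_keep) * frac


class WindowedRateController:
    """Window-based adjustment: smooth the per-step entropy estimates
    over a sliding window before converting to a compression rate, so
    the rate tracks trends rather than per-step noise."""

    def __init__(self, window=50, **rate_kwargs):
        self.entropies = deque(maxlen=window)
        self.rate_kwargs = rate_kwargs

    def update(self, grad):
        self.entropies.append(estimate_gradient_entropy(grad))
        mean_entropy = sum(self.entropies) / len(self.entropies)
        return entropy_to_rate(mean_entropy, **self.rate_kwargs)


# Usage: gradients typically shrink as training converges, so the kept
# fraction drifts with the entropy of the (synthetic) gradient stream.
ctrl = WindowedRateController(window=10)
rng = np.random.default_rng(1)
for step in range(20):
    grad = rng.normal(scale=1.0 / (step + 1), size=10_000)
    rate = ctrl.update(grad)           # fraction of coordinates to keep
```

In a pipeline-parallel setup, one such controller per pipeline stage would let each stage pick its own rate, matching the cross-stage adaptation the abstract describes.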