🤖 AI Summary
Traditional three-stage Huffman coding incurs significant computational and latency overhead in multi-accelerator communication because it requires real-time frequency analysis, codebook generation, and codebook transmission, making it ill-suited for low-latency settings. This work proposes a single-stage Huffman encoder that constructs a fixed codebook from the average probability distribution of historical data batches, eliminating real-time codebook generation and transmission. For the first time, this approach is applied to lossless compression of cross-layer and sharded tensors. Evaluated on the Gemma 2B model, the method achieves a compression ratio only 0.5% lower than per-shard Huffman coding and lies within 1% of the Shannon limit, substantially improving communication efficiency while closely approaching the theoretical compression bound.
📝 Abstract
Training and serving Large Language Models (LLMs) require partitioning data across multiple accelerators, where collective operations are frequently bottlenecked by network bandwidth. Lossless compression using Huffman codes is an effective way to alleviate this issue; however, its three-stage design, which requires on-the-fly frequency analysis, codebook generation, and transmission of the codebook alongside the data, introduces computational, latency, and data overheads that are prohibitive for latency-sensitive scenarios such as die-to-die communication. This paper proposes a single-stage Huffman encoder that eliminates these overheads by using fixed codebooks derived from the average probability distribution of previous data batches. Through our analysis of the Gemma 2B model, we demonstrate that tensors exhibit high statistical similarity across layers and shards. Using this approach, we achieve compression within 0.5% of per-shard Huffman coding and within 1% of the ideal Shannon compressibility, enabling efficient on-the-fly compression.
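The single-stage idea described above can be sketched in a few lines: build one Huffman codebook offline from the aggregated histogram of previous batches, then encode every new batch with that fixed codebook, with no per-batch frequency pass and no codebook transmitted with the data. This is a minimal illustration, not the paper's implementation; the byte-level symbols and the helper names `build_codebook` and `encode` are assumptions.

```python
import heapq
from collections import Counter
from itertools import count

def build_codebook(histogram):
    """Build a Huffman codebook (symbol -> bit string) from a frequency histogram."""
    tie = count()  # tie-breaker so the heap never compares the dict payloads
    heap = [(freq, next(tie), {sym: ""}) for sym, freq in histogram.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two least-frequent subtrees, prefixing their codes.
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

def encode(batch, codebook):
    """Encode a batch of byte symbols with a fixed, precomputed codebook."""
    # In practice an escape/fallback symbol would cover bytes absent
    # from the historical histogram; omitted here for brevity.
    return "".join(codebook[sym] for sym in batch)

# Offline (once): average the frequency statistics of previous data batches.
history = [b"abracadabra", b"banana", b"cabbage"]  # stand-ins for tensor shards
avg_hist = Counter()
for batch in history:
    avg_hist.update(batch)
codebook = build_codebook(avg_hist)

# Online (per batch): single-stage encoding with the fixed codebook.
bits = encode(b"cabana", codebook)
```

Because the codebook is shared ahead of time by sender and receiver, the per-batch cost reduces to a table lookup per symbol, which is what makes the scheme viable for latency-sensitive links.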