Single-Stage Huffman Encoder for ML Compression

📅 2026-01-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional three-stage Huffman coding incurs significant computational and latency overhead in multi-accelerator communication because it requires real-time frequency analysis, codebook generation, and codebook transmission, making it ill-suited to low-latency scenarios. This work proposes a single-stage Huffman encoder that constructs a fixed codebook from the average probability distribution of historical data batches, eliminating real-time codebook generation and transmission. For the first time, this approach is applied to lossless compression of cross-layer and sharded tensors. Evaluated on the Gemma 2B model, the method achieves a compression ratio only 0.5% lower than per-shard Huffman coding and within 1% of the Shannon limit, substantially improving communication efficiency while closely approaching the theoretical compression bound.

📝 Abstract
Training and serving Large Language Models (LLMs) require partitioning data across multiple accelerators, where collective operations are frequently bottlenecked by network bandwidth. Lossless compression using Huffman codes is an effective way to alleviate the issue; however, its three-stage design, which requires on-the-fly frequency analysis, codebook generation, and transmission of the codebook along with the data, introduces computational, latency, and data overheads that are prohibitive for latency-sensitive scenarios such as die-to-die communication. This paper proposes a single-stage Huffman encoder that eliminates these overheads by using fixed codebooks derived from the average probability distribution of previous data batches. Through our analysis of the Gemma 2B model, we demonstrate that tensors exhibit high statistical similarity across layers and shards. Using this approach, we achieve compression within 0.5% of per-shard Huffman coding and within 1% of the ideal Shannon compressibility, enabling efficient on-the-fly compression.
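The core idea of the single-stage scheme can be sketched as follows: average the symbol distributions of previous batches once, build one Huffman codebook from that average, and then encode every new batch with a plain table lookup, with no per-batch frequency analysis or codebook transmission. This is a minimal illustrative sketch, not the paper's implementation; the function names and toy byte-string batches are hypothetical.

```python
import heapq
from collections import Counter
from itertools import count

def build_fixed_codebook(history_batches):
    """Build a Huffman codebook from the average symbol distribution
    of historical batches (hypothetical sketch, not the paper's code)."""
    # Average the normalized per-batch frequency distributions.
    avg = Counter()
    for batch in history_batches:
        counts = Counter(batch)
        total = sum(counts.values())
        for sym, c in counts.items():
            avg[sym] += c / total
    # Standard Huffman tree construction over the averaged weights.
    tiebreak = count()  # unique ints keep heap comparisons well-defined
    heap = [[w, next(tiebreak), [sym, ""]] for sym, w in avg.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]  # left branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]  # right branch
        heapq.heappush(heap, [lo[0] + hi[0], next(tiebreak)] + lo[2:] + hi[2:])
    return {sym: code for sym, code in heap[0][2:]}

def encode(data, codebook):
    # Single-stage encoding: just a table lookup per symbol, since the
    # codebook is fixed ahead of time and never sent with the data.
    return "".join(codebook[sym] for sym in data)

# The codebook is built once from past batches, then reused for new data.
history = [b"aaabbc", b"aabbbc", b"aaabcc"]
codebook = build_fixed_codebook(history)
bits = encode(b"aabbc", codebook)
```

Because the codebook is fixed, a new batch whose distribution drifts from the historical average compresses slightly worse than with per-batch Huffman coding, which is the small gap (0.5% in the paper's Gemma 2B evaluation) traded for removing the frequency-analysis and codebook-transmission stages.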
Problem

Research questions and friction points this paper is trying to address.

Huffman coding
network bandwidth bottleneck
lossless compression
latency-sensitive communication
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

single-stage Huffman encoding
fixed codebook
lossless compression
LLM communication
on-the-fly compression
Aditya Agrawal
Software Engineer, Google
Computer Architecture · Deep Learning · Performance Analysis · ML Quantization · ML Codesign
Albert Magyar
Google LLC
Hiteshwar Eswaraiah
Google LLC
Patrick Sheridan
Google LLC
Pradeep Janedula
Google LLC
Ravi Krishnan Venkatesan
Google LLC
Krishna Nair
Google LLC
Ravi Iyer
Google