Dual Length Codes for Lossless Compression of BFloat16

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the network-bandwidth bottleneck in collective communication during large language model training and inference, where existing compression methods either decode slowly with high hardware complexity (e.g., Huffman coding) or achieve low compression ratios (e.g., generic universal codes). To overcome these limitations, the authors propose a hybrid encoding scheme tailored to BFloat16 tensors. Leveraging symbol-frequency analysis of the Gemma model, the method assigns 4-bit short codes to the eight most frequent symbols and 9-bit long codes to the rest, distinguished by a 1-bit prefix. Decoding is accelerated by a compact lookup table with only eight entries. The approach achieves a size reduction of 18.6% on BFloat16 data, slightly below Huffman coding's 21.3%, while significantly improving decoding speed and substantially reducing hardware complexity.
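The reported figure can be sanity-checked with back-of-the-envelope arithmetic (a sketch, assuming the scheme codes 8-bit symbols and that the top-8 symbols carry roughly 50% of the probability mass, as the abstract states):

```python
# Back-of-the-envelope check of the reported 18.6% size reduction.
# Assumption (for illustration): 8-bit input symbols, with the 8
# short-coded symbols covering ~50% of the probability mass.

p_short = 0.5                 # cumulative probability of the 8 short-coded symbols
short_bits, long_bits = 4, 9  # code lengths from the paper
raw_bits = 8                  # uncompressed bits per symbol

expected_bits = p_short * short_bits + (1 - p_short) * long_bits  # 6.5 bits/symbol
reduction = 1 - expected_bits / raw_bits  # ~18.75%, close to the reported 18.6%

print(f"expected bits/symbol: {expected_bits}")
print(f"size reduction: {reduction:.1%}")
```

The small gap to the measured 18.6% is consistent with the top-8 mass being only approximately 50% on the actual Gemma tensors.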

📝 Abstract
Training and serving Large Language Models (LLMs) rely heavily on parallelization and collective operations, which are frequently bottlenecked by network bandwidth. Lossless compression using, e.g., Huffman codes can alleviate the issue; however, Huffman codes suffer from slow, bit-sequential decoding and high hardware complexity due to deep tree traversals. Universal codes, e.g., Exponential-Golomb codes, are faster to decode but do not exploit the symbol frequency distribution. To address these limitations, this paper introduces Dual Length Codes, a hybrid approach designed to balance compression efficiency with decoding speed. Analyzing BFloat16 tensors from the Gemma model, we observed that the top 8 most frequent symbols account for approximately 50% of the cumulative probability. These 8 symbols are assigned a short 4-bit code; the remaining 248 symbols are assigned a longer 9-bit code. The coding scheme uses a single prefix bit to distinguish between the two code lengths, and a small lookup table with only 8 entries for both encoding and decoding. The scheme achieves a compressibility of 18.6%, compared to 21.3% for Huffman codes, while significantly speeding up decoding and simplifying the hardware.
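The scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bit ordering and the choice of 0 as the short-code prefix are assumptions, and symbols are treated as the 256 possible byte values (8 short-coded + 248 long-coded, as stated):

```python
# Sketch of a Dual Length Code over 8-bit symbols. Assumptions (not
# specified in the paper): prefix bit 0 marks the 4-bit short code,
# prefix bit 1 marks the 9-bit long code; bits are MSB-first.

from collections import Counter

def build_top8(data: bytes):
    """Return the 8 most frequent symbols (the 8-entry LUT) and an index map."""
    top8 = [sym for sym, _ in Counter(data).most_common(8)]
    return top8, {sym: i for i, sym in enumerate(top8)}

def encode(data: bytes, short_index: dict) -> list:
    """Emit prefix 0 + 3-bit LUT index, or prefix 1 + the raw 8-bit symbol."""
    bits = []
    for sym in data:
        if sym in short_index:
            bits.append(0)                                   # short-code prefix
            bits.extend((short_index[sym] >> k) & 1 for k in (2, 1, 0))
        else:
            bits.append(1)                                   # long-code prefix
            bits.extend((sym >> k) & 1 for k in range(7, -1, -1))
    return bits

def decode(bits: list, lut: list) -> bytes:
    """Read one prefix bit, then either 3 bits (LUT index) or 8 bits (raw)."""
    out, i = bytearray(), 0
    while i < len(bits):
        if bits[i] == 0:                                     # 4-bit short code
            out.append(lut[(bits[i+1] << 2) | (bits[i+2] << 1) | bits[i+3]])
            i += 4
        else:                                                # 9-bit long code
            sym = 0
            for b in bits[i + 1:i + 9]:
                sym = (sym << 1) | b
            out.append(sym)
            i += 9
    return bytes(out)
```

Because the prefix bit fully determines the code length, the decoder never traverses a tree: each symbol costs one table lookup or one raw read, which is what makes the hardware so much simpler than a Huffman decoder.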
Problem

Research questions and friction points this paper is trying to address.

lossless compression
network bandwidth bottleneck
decoding speed
hardware complexity
symbol frequency distribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual Length Codes
BFloat16 compression
lossless compression
decoding efficiency
hardware complexity
🔎 Similar Papers
No similar papers found.
Aditya Agrawal
Software Engineer, Google
Computer Architecture · Deep Learning · Performance Analysis · ML Quantization · ML Codesign
Albert Magyar
Google LLC
Hiteshwar Eswaraiah
Google LLC
Patrick Sheridan
Google LLC
Pradeep Janedula
Google LLC
Ravi Krishnan Venkatesan
Google LLC
Krishna Nair
Google LLC
Ravi Iyer
Google