AI Summary
This work addresses the high inter-chiplet communication latency that transmitting BF16-formatted data incurs during large language model (LLM) inference. The authors propose LEXI, a Huffman-coding-based lossless exponent compression scheme that exploits, for the first time, the low entropy of the BF16 exponent field to compress activations and cache data in real time, while storing weights in compressed form and decompressing them just in time before computation. Implemented in GF 22 nm technology with multi-lane LUT decoders integrated into the ports of network-on-chip routers, LEXI reduces inter-chiplet communication latency by 33-45% and end-to-end inference latency by 30-35% on modern LLMs such as Jamba, Zamba, and Qwen, at only 0.09% area and energy overhead, thereby balancing throughput, accuracy, and energy efficiency.
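A minimal sketch of the core idea, assuming nothing about the paper's actual codec: split each BF16 word into its 8-bit exponent field and the remaining bits, then Huffman-code only the exponent stream. All function names and the synthetic Gaussian activations below are illustrative, not from the paper.

```python
from collections import Counter
import heapq

import numpy as np


def bf16_exponents(values: np.ndarray) -> np.ndarray:
    """Extract the 8-bit exponent field from BF16 values.

    BF16 layout: 1 sign bit | 8 exponent bits | 7 mantissa bits.
    """
    bits = values.astype(np.float32).view(np.uint32) >> 16  # upper half = BF16 word
    return ((bits >> 7) & 0xFF).astype(np.uint8)


def huffman_code_lengths(symbols: np.ndarray) -> dict:
    """Optimal prefix-code lengths for the observed symbol distribution."""
    freq = Counter(symbols.tolist())
    # Heap entries: (frequency, tiebreaker, {symbol: current code depth}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}  # one level deeper
        heapq.heappush(heap, (fa + fb, tie, merged))
        tie += 1
    return heap[0][2]


# Toy stand-in for an activation tensor: values cluster in a narrow magnitude
# range, so a few exponent values dominate and receive short codewords.
acts = np.random.randn(1 << 16).astype(np.float32)
exps = bf16_exponents(acts)
lengths = huffman_code_lengths(exps)
avg_bits = sum(lengths[s] for s in exps.tolist()) / exps.size
print(f"average coded exponent length: {avg_bits:.2f} bits (vs. 8 uncoded)")
```

Because a handful of exponent values dominate, the average codeword length lands close to the stream's entropy, well below the 8 bits the format allocates.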
Abstract
Data movement overheads increase the inference latency of state-of-the-art large language models (LLMs). These models commonly use the bfloat16 (BF16) format for stable training. The format allocates eight bits to the exponent, but our profiling reveals that exponent streams exhibit fewer than 3 bits of Shannon entropy, indicating high inherent compressibility. To exploit this potential, we propose LEXI, a novel lossless exponent compression scheme based on Huffman coding. LEXI compresses activations and caches on the fly while storing compressed weights for just-in-time decompression near compute, without sacrificing system throughput or model accuracy. The codecs at the ingress and egress ports of network-on-chip routers sustain the maximum link bandwidth via multi-lane LUT decoders, incurring only 0.09 percent area and energy overheads in GF 22 nm technology. LEXI reduces inter-chiplet communication and end-to-end inference latencies by 33-45 percent and 30-35 percent, respectively, on modern Jamba, Zamba, and Qwen LLMs implemented on a homogeneous chiplet architecture.
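As a rough illustration of the profiling claim (not the authors' measurement code), the sketch below computes the Shannon entropy of a tensor's BF16 exponent stream; the Gaussian test tensor is an assumption standing in for real activation data.

```python
import numpy as np


def exponent_entropy_bits(values: np.ndarray) -> float:
    """Shannon entropy (in bits) of a tensor's BF16 exponent stream."""
    bits = values.astype(np.float32).view(np.uint32) >> 16  # BF16 bit pattern
    exps = (bits >> 7) & 0xFF                               # 8-bit exponent field
    counts = np.bincount(exps.astype(np.int64), minlength=256)
    p = counts[counts > 0] / exps.size
    return float(-(p * np.log2(p)).sum())


# Even a toy Gaussian tensor lands near 2.5 bits; per the paper's profiling,
# real activation and cache streams stay under 3 bits, far below the 8 bits
# the format allocates, which is the headroom LEXI's Huffman codec exploits.
x = np.random.randn(1 << 20).astype(np.float32)
print(f"exponent entropy: {exponent_entropy_bits(x):.2f} bits out of 8")
```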