LEXI: Lossless Exponent Coding for Efficient Inter-Chiplet Communication in Hybrid LLMs

πŸ“… 2026-03-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the high inter-chiplet communication latency incurred during large language model (LLM) inference when transmitting BF16-formatted data. The authors propose LEXI, a Huffman coding–based lossless exponent compression scheme that exploits, for the first time, the low entropy of the BF16 exponent field: activations and cache data are compressed in real time, while weights are stored compressed and decompressed just-in-time before computation. Implemented in GF 22 nm technology with multi-channel LUT decoders integrated into the ports of on-chip network routers, LEXI reduces inter-chiplet communication latency by 33–45% and end-to-end inference latency by 30–35% across hybrid LLMs such as Jamba, Zamba, and Qwen, at only 0.09% area and energy overhead, striking a favorable balance among throughput, accuracy, and energy efficiency.
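
The core mechanism lends itself to a short illustration. The sketch below is our own, not the paper's hardware implementation: it extracts the 8-bit BF16 exponent field from a tensor and builds a Huffman codebook over it. The names `bf16_exponents` and `huffman_book`, and the Gaussian stand-in for activations, are all illustrative assumptions.

```python
import heapq
from collections import Counter

import numpy as np

def bf16_exponents(x: np.ndarray) -> np.ndarray:
    """Exponent field of each value: BF16 keeps the top 16 bits of float32
    (1 sign, 8 exponent, 7 mantissa), so the exponent is bits 30..23."""
    bits = x.astype(np.float32).view(np.uint32)
    return ((bits >> 23) & 0xFF).astype(np.uint8)

def huffman_book(symbols: np.ndarray) -> dict[int, str]:
    """Textbook heap-based Huffman construction over symbol frequencies."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(symbols.tolist()).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:                 # prepend a bit to every code below
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

# Gaussian stand-in for real activations: exponents cluster around 126-127,
# so the codebook assigns them 1-3 bit codes instead of the fixed 8 bits.
acts = np.random.randn(1 << 16).astype(np.float32)
exps = bf16_exponents(acts)
book = huffman_book(exps)
avg = sum(len(book[e]) for e in exps.tolist()) / exps.size
print(f"average code length: {avg:.2f} bits vs. 8 bits uncompressed")
```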

πŸ“ Abstract
Data movement overheads increase the inference latency of state-of-the-art large language models (LLMs). These models commonly use the bfloat16 (BF16) format for stable training. Floating-point standards allocate eight bits to the exponent, but our profiling reveals that exponent streams exhibit fewer than 3 bits of Shannon entropy, indicating high inherent compressibility. To exploit this potential, we propose LEXI, a novel lossless exponent compression scheme based on Huffman coding. LEXI compresses activations and caches on the fly while storing compressed weights for just-in-time decompression near compute, without sacrificing system throughput or model accuracy. The codecs at the ingress and egress ports of network-on-chip routers sustain the maximum link bandwidth via multi-lane LUT decoders, incurring only 0.09 percent area and energy overheads in GF 22 nm technology. LEXI reduces inter-chiplet communication and end-to-end inference latencies by 33–45 percent and 30–35 percent, respectively, on modern Jamba, Zamba, and Qwen LLMs implemented on a homogeneous chiplet architecture.
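
The sub-3-bit entropy claim can be sanity-checked in a few lines. The sketch below is our own measurement harness, reusing `bf16_exponents` from the earlier block, with a Gaussian tensor as an assumed stand-in for the activation streams the paper profiles.

```python
import numpy as np

def exponent_entropy(exps: np.ndarray) -> float:
    """Shannon entropy H = -sum(p * log2(p)) over the 256 exponent values."""
    p = np.bincount(exps, minlength=256).astype(np.float64) / exps.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

exps = bf16_exponents(np.random.randn(1 << 20).astype(np.float32))
print(f"H = {exponent_entropy(exps):.2f} of the 8 allocated bits")  # ~2.5 here
```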
Problem

Research questions and friction points this paper is trying to address.

inter-chiplet communication
large language models
exponent compression
inference latency
data movement overhead
Innovation

Methods, ideas, or system contributions that make the work stand out (a decoder sketch follows the keyword list below).

lossless exponent compression
Huffman coding
inter-chiplet communication
on-the-fly compression
chiplet-based LLMs
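
The "Huffman coding" and "on-the-fly compression" items above are realized in hardware, per the abstract, as multi-lane LUT decoders at router ports. The sketch below is a minimal single-lane software analogue under our own assumptions about the table layout; `build_lut` and `lut_decode` are hypothetical names, and the codebook format matches `huffman_book` from the earlier sketch.

```python
def build_lut(book: dict[int, str]) -> tuple[list[tuple[int, int]], int]:
    """Expand a {symbol: bitstring} codebook into a 2**L-entry table, where
    L is the longest code; entry i holds (symbol, code length) for the
    unique code that prefixes the L-bit pattern i."""
    L = max(len(code) for code in book.values())
    lut = [(0, 0)] * (1 << L)
    for sym, code in book.items():
        base = int(code, 2) << (L - len(code))       # left-align the code
        for i in range(1 << (L - len(code))):        # fill every suffix
            lut[base + i] = (sym, len(code))
    return lut, L

def lut_decode(bits: str, lut: list[tuple[int, int]], L: int, n: int) -> list[int]:
    """Decode n symbols with one table lookup per symbol: peek L bits,
    read (symbol, length), then advance by length."""
    bits += "0" * L                                   # pad the final peek
    out, pos = [], 0
    for _ in range(n):
        sym, length = lut[int(bits[pos:pos + L], 2)]
        out.append(sym)
        pos += length
    return out

# Toy prefix-free codebook, e.g. for the four most frequent exponent values:
book = {126: "0", 127: "10", 125: "110", 128: "111"}
lut, L = build_lut(book)
bits = "".join(book[s] for s in [126, 127, 126, 128, 125])
print(lut_decode(bits, lut, L, 5))   # -> [126, 127, 126, 128, 125]
```

A hardware multi-lane variant would replicate this table across decoder lanes so several symbols emerge per cycle; that parallelism is presumably what lets the router-port codecs sustain the maximum link bandwidth the abstract claims.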
Miao Sun
WeRide
Computer Vision · Autonomous Driving
Alish Kanani
University of Wisconsin-Madison
Chiplets · Thermal Management · Performance Modeling · Task Scheduling · Approximate Circuits
Kaushik Shroff
Department of Electrical and Computer Engineering, University of Wisconsin-Madison
Umit Ogras
Department of Electrical and Computer Engineering, University of Wisconsin-Madison