Bandwidth-Efficient Adaptive Mixture-of-Experts via Low-Rank Compensation

📅 2025-12-18
🤖 AI Summary
In MoE model inference, a fundamental trade-off exists between I/O bandwidth constraints and accuracy degradation: token-level sparse routing induces irregular data transfers that exacerbate I/O bottlenecks, while static uniform quantization ignores expert heterogeneity, causing severe accuracy loss under aggressive compression. Method: We propose a routing-guided low-rank compensation mechanism—the first router-aware adaptive precision compensation paradigm—transmitting only the low-rank factors of the top-n experts and dynamically reconstructing critical expert weights conditioned on routing decisions. This is integrated with mixed-precision quantization and GPU/NDP co-offloading. Contribution/Results: Our approach significantly improves throughput in I/O-bound settings, achieves accuracy close to full-precision MoE under identical bandwidth constraints, and establishes a superior bandwidth–accuracy trade-off compared to state-of-the-art methods.
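To see why shipping compact low-rank factors alongside low-bit weights beats transferring full-precision experts, a back-of-the-envelope byte count helps. This sketch uses hypothetical dimensions and a rank chosen for illustration; none of these numbers come from the paper.

```python
# Illustrative bandwidth arithmetic for one expert's FFN matrix.
# All dimensions are assumptions, not values from the paper.
d_model, d_ff, rank = 4096, 14336, 64

fp16_bytes   = d_model * d_ff * 2           # full-precision expert weights
int4_bytes   = d_model * d_ff // 2          # aggressively quantized (4-bit) copy
factor_bytes = rank * (d_model + d_ff) * 2  # fp16 low-rank factors U and V

# Router-compensated transfer = low-bit weights + compact factors
compensated = int4_bytes + factor_bytes
print(f"fp16 expert: {fp16_bytes / 2**20:.1f} MiB")
print(f"int4 + rank-{rank} factors: {compensated / 2**20:.1f} MiB")
```

Even with the compensator included, the transfer stays close to the pure low-bit cost, since the factors grow with `rank * (d_model + d_ff)` rather than `d_model * d_ff`.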

📝 Abstract
Mixture-of-Experts (MoE) models scale capacity via sparse activation but stress memory and bandwidth. Offloading alleviates GPU memory by fetching experts on demand, yet token-level routing causes irregular transfers that make inference I/O-bound. Static uniform quantization reduces traffic but degrades accuracy under aggressive compression by ignoring expert heterogeneity. We present Bandwidth-Efficient Adaptive Mixture-of-Experts via Low-Rank Compensation, which performs router-guided precision restoration using precomputed low-rank compensators. At inference time, our method transfers compact low-rank factors for the top-n (n < k) experts per token and applies compensation to them, keeping the remaining experts low-bit. Integrated with offloading on GPU and GPU-NDP systems, our method delivers a superior bandwidth-accuracy trade-off and improved throughput.
Problem

Research questions and friction points this paper is trying to address.

Reduces bandwidth stress in sparse MoE models
Addresses irregular I/O from token-level expert fetching
Improves accuracy under aggressive expert quantization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Router-guided precision restoration with low-rank compensators
Transfers compact low-rank factors for Top-n experts per token
Integrates offloading on GPU and GPU-NDP systems for efficiency
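The innovation points above can be sketched end to end: precompute a rank-r SVD compensator for each expert's quantization error offline, then at inference let the router's top-k decision determine which top-n experts get their factors transferred and their precision restored. This is a minimal illustrative sketch, assuming symmetric 4-bit quantization and SVD-derived compensators; all dimensions and helper names are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, num_experts = 64, 128, 8
top_k, top_n, rank = 4, 2, 4  # top_n < top_k: only top_n experts are compensated

# Full-precision expert weights (reference copy, hypothetical sizes).
W_full = rng.standard_normal((num_experts, d_model, d_ff)).astype(np.float32)

def quantize_int4(W):
    """Symmetric per-expert 4-bit quantization (illustrative, not the paper's scheme)."""
    scale = np.abs(W).max() / 7.0
    return np.clip(np.round(W / scale), -8, 7) * scale

# Resident low-bit copy of every expert.
W_low = np.stack([quantize_int4(W) for W in W_full])

def lowrank_factors(E, r):
    """Best rank-r approximation of the quantization error E, via truncated SVD."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r, :]

# Precomputed compensators: compact factors of each expert's quantization error.
comp = [lowrank_factors(W_full[e] - W_low[e], rank) for e in range(num_experts)]

# Inference for one token: router scores pick top_k experts,
# but only the top_n most important ones get precision restored.
x = rng.standard_normal(d_model).astype(np.float32)
router_logits = rng.standard_normal(num_experts)
topk = np.argsort(router_logits)[-top_k:][::-1]  # experts, best first

out = np.zeros(d_ff, dtype=np.float32)
for i, e in enumerate(topk):
    W = W_low[e]
    if i < top_n:            # transfer compact (U, V) and reconstruct
        U, V = comp[e]
        W = W + U @ V        # low-bit weights + low-rank compensation
    out += x @ W             # remaining experts run purely low-bit
```

The design point this illustrates: the compensator is computed once offline, so the runtime cost per compensated expert is only the small `(U, V)` transfer plus a rank-r matrix product, gated entirely by the router's decision.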
👥 Authors
Zhenyu Liu — Rensselaer Polytechnic Institute, Troy, NY, USA
Yunzhen Liu — University of Massachusetts Amherst, Amherst, MA, USA
Zehao Fan — Rensselaer Polytechnic Institute, Troy, NY, USA
Garrett Gagnon — Rensselaer Polytechnic Institute, Troy, NY, USA
Yayue Hou — Rensselaer Polytechnic Institute, Troy, NY, USA
Nan Wu — George Washington University, Washington, DC, USA
Yangwook Kang — Samsung Semiconductor Inc. (Storage Systems · NVRAM Devices · Operating Systems)
Liu Liu — Rensselaer Polytechnic Institute, Troy, NY, USA