🤖 AI Summary
Problem: MoE model inference faces a fundamental trade-off between I/O bandwidth and accuracy: token-level sparse routing induces irregular expert transfers that create I/O bottlenecks, while static uniform quantization ignores expert heterogeneity and suffers severe accuracy loss under aggressive compression. Method: We propose a routing-guided low-rank compensation mechanism, the first router-aware adaptive precision compensation paradigm, which transmits only the compact low-rank factors of the top-n routed experts and dynamically reconstructs those critical expert weights conditioned on routing decisions. This mechanism is integrated with mixed-precision quantization and GPU/NDP co-offloading. Contribution/Results: Our approach substantially improves throughput in I/O-bound settings, achieves accuracy close to full-precision MoE under the same bandwidth budget, and establishes a superior bandwidth-accuracy trade-off over state-of-the-art methods.
📝 Abstract
Mixture-of-Experts (MoE) models scale capacity via sparse activation but stress memory capacity and bandwidth. Offloading alleviates GPU memory pressure by fetching experts on demand, yet token-level routing causes irregular transfers that leave inference I/O-bound. Static uniform quantization reduces traffic but, by ignoring expert heterogeneity, degrades accuracy under aggressive compression. We present Bandwidth-Efficient Adaptive Mixture-of-Experts via Low-Rank Compensation, which performs router-guided precision restoration using precomputed low-rank compensators. At inference time, our method transfers compact low-rank factors only for the Top-n of the k routed experts per token (n < k), applies compensation to those experts, and keeps the rest low-bit. Integrated with offloading on GPU-only and GPU-NDP systems, our method delivers a superior bandwidth-accuracy trade-off and higher throughput.
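The compensation mechanism described above can be sketched in a few lines; this is a minimal NumPy illustration, not the paper's implementation. Simulated uniform quantization stands in for the actual mixed-precision scheme, and the names `make_compensator` and `compensated_expert` are hypothetical:

```python
import numpy as np

def make_compensator(w, bits=4, rank=8):
    """Precompute a low-rank compensator for one expert's weight matrix.

    Illustrative sketch: simulated uniform quantization plus a truncated
    SVD of the quantization residual.
    """
    # Simulated low-bit uniform quantization (dequantized back to float)
    scale = (w.max() - w.min()) / (2 ** bits - 1)
    w_q = np.round((w - w.min()) / scale) * scale + w.min()
    # Truncated SVD of the residual yields compact factors U, V whose
    # product approximately restores the precision lost to quantization.
    u, s, vt = np.linalg.svd(w - w_q, full_matrices=False)
    U = u[:, :rank] * s[:rank]   # (d_out, rank)
    V = vt[:rank, :]             # (rank, d_in)
    return w_q, U, V

def compensated_expert(w_q, U, V, restore):
    # Router-guided restoration: only a Top-n routed expert pays the
    # extra transfer of U, V and gets the higher-precision weights;
    # all other experts keep the low-bit approximation.
    return w_q + U @ V if restore else w_q
```

The bandwidth saving comes from the shapes: for a d_out x d_in expert, the factors add only rank x (d_out + d_in) values on top of the low-bit weights, far less than re-sending the full-precision matrix.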