🤖 AI Summary
To address low hardware utilization and the difficulty of hiding I/O latency in memory-constrained offloading of Mixture-of-Experts (MoE) inference, this paper proposes a speculative decoding–based GPU-CPU collaborative offloading framework. The method leverages a lightweight draft model to increase the computation load of each expert—marking the first application of speculative decoding to MoE offloading—and designs a CPU-side chunked attention verification kernel to reduce verification overhead. System-level optimization is achieved via theory-driven Roofline analysis and an automated hyperparameter tuner. Experiments demonstrate that, while preserving model accuracy, the approach achieves up to 2.5× higher decoding throughput than state-of-the-art MoE offloading schemes, significantly improving end-to-end hardware utilization and inference efficiency.
📝 Abstract
Recent advances in Mixture-of-Experts (MoE) models have significantly increased both their parameter scale and their performance. Extensive offloading techniques have been proposed to address the GPU memory limitations of MoE inference. However, due to the I/O bottleneck and the sparse computation pattern of MoE models, existing offloading techniques still suffer from low hardware utilization. To fully utilize hardware resources, we propose SpecMoEOff, which employs speculative decoding to enlarge the workload of each expert. SpecMoEOff orchestrates the GPU and CPU through both theoretical and empirical Roofline analysis. In addition, we develop a dedicated CPU chunked attention verification kernel to adapt speculative decoding to offloading scenarios while minimizing the additional overhead introduced by the draft model. SpecMoEOff further integrates an optimizer that automatically tunes the hyperparameters of speculative decoding for given hardware and workloads. Experimental results show that SpecMoEOff achieves up to 2.5x decode throughput improvement over state-of-the-art MoE offloading techniques.
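The intuition behind enlarging each expert's workload can be illustrated with a back-of-the-envelope Roofline calculation (this sketch is not from the paper; the FFN shapes and fp16 assumption are hypothetical): when speculative decoding verifies `k` draft tokens per expert weight load, the arithmetic intensity of the offloaded expert GEMMs grows linearly with `k`, so the same I/O traffic buys proportionally more compute.

```python
def arithmetic_intensity(tokens: int, d_model: int, d_ff: int,
                         bytes_per_param: int = 2) -> float:
    """FLOPs per byte of expert weights moved for one expert invocation.

    Illustrative assumptions (not from the paper): a 2-matrix FFN expert
    (d_model x d_ff and d_ff x d_model), fp16 weights, and weight transfer
    dominating I/O traffic.
    """
    # Each GEMM of shape (tokens x d_model) @ (d_model x d_ff) costs
    # 2 * tokens * d_model * d_ff FLOPs; the expert has two such matrices.
    flops = 2 * tokens * (2 * d_model * d_ff)
    # The two weight matrices are loaded once per invocation.
    bytes_moved = 2 * d_model * d_ff * bytes_per_param
    return flops / bytes_moved

# Verifying k=4 draft tokens per weight load yields 4x the arithmetic
# intensity of one-token-at-a-time decoding.
ai_1 = arithmetic_intensity(tokens=1, d_model=4096, d_ff=14336)
ai_4 = arithmetic_intensity(tokens=4, d_model=4096, d_ff=14336)
assert ai_4 == 4 * ai_1
```

Under these assumptions the intensity equals the token count in FLOPs/byte, which is why batching verification of multiple draft tokens moves the offloaded expert computation up the Roofline toward the compute-bound region.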