🤖 AI Summary
This work addresses the surge in expert activation caused by request batching and speculative decoding in Mixture-of-Experts (MoE) models during production inference, which undermines their computational efficiency. To mitigate this issue, the authors propose XShare, the first collaborative expert-sharing mechanism tailored for heterogeneous request batches. XShare formulates batch-aware expert selection as a modular optimization problem and introduces a greedy algorithm that requires no model retraining. By integrating intra-batch expert sharing with a hierarchical relevance-aware strategy, XShare dynamically optimizes expert assignments per batch to maximize the total gating score. Experiments demonstrate that XShare reduces expert activations by up to 30% under standard batching, lowers peak GPU memory usage by up to 3x in expert-parallel deployments, and improves throughput by up to 14% in speculative decoding scenarios.
📝 Abstract
Mixture-of-Experts (MoE) architectures are increasingly used to efficiently scale large language models. However, in production inference, request batching and speculative decoding significantly amplify expert activation, eroding these efficiency benefits. We address this issue by modeling batch-aware expert selection as a modular optimization problem and designing efficient greedy algorithms for different deployment settings. The proposed method, XShare, requires no retraining and dynamically adapts to each batch by maximizing the total gating score of the selected experts. It reduces expert activation by up to 30% under standard batching, cuts peak GPU load by up to 3x in expert-parallel deployments, and achieves up to 14% throughput gains in speculative decoding via hierarchical, correlation-aware expert selection, even when the requests in a batch are drawn from heterogeneous datasets.
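To make the core idea concrete, here is a minimal sketch of batch-aware expert selection under a modular objective. Because the objective (the total gating score of the chosen experts, summed over the batch) is modular, a greedy selection reduces to picking the top-k experts by aggregated score; this function and its shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def shared_topk_experts(gating_scores, k):
    """Illustrative greedy batch-aware expert selection (not the paper's code).

    gating_scores: (batch_size, num_experts) array of per-request gating scores.
    Returns the indices of the k experts whose summed gating score across the
    whole batch is largest, i.e. one shared expert set for the entire batch.
    For a modular (additive) objective, this greedy top-k choice is exact.
    """
    total = gating_scores.sum(axis=0)      # aggregate score per expert over the batch
    order = np.argsort(total)[::-1]        # experts sorted by descending total score
    return sorted(order[:k].tolist())      # greedy pick = top-k under a modular objective

# Toy example: 3 requests routed over 4 experts, sharing 2 experts per batch.
scores = np.array([
    [0.7, 0.2, 0.1, 0.0],
    [0.6, 0.1, 0.3, 0.0],
    [0.1, 0.1, 0.7, 0.1],
])
print(shared_topk_experts(scores, 2))  # → [0, 2]
```

In this toy batch, experts 0 and 2 carry the largest aggregate gating mass, so all three requests share those two experts instead of each activating its own top-k, which is the activation reduction the paper measures.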