🤖 AI Summary
Elastic serving of Mixture-of-Experts (MoE) large language models in cloud environments remains challenging: horizontal scaling is coarse-grained and incurs high provisioning latency and cost, while vertical scaling requires instance restarts and therefore service interruption.
Method: This paper proposes ElasticMoE, a fine-grained, low-latency, zero-downtime dynamic scaling framework. Its core innovations are: (1) decoupling inference execution from memory operations so that scaling steps proceed concurrently with serving; (2) a virtual-memory-based expert redistribution mechanism that migrates MoE experts without costly buffer reallocations; and (3) an HBM Management Module (HMM) that reuses model weights and KV caches via zero-copy remapping, with high-bandwidth peer-to-peer (P2P) transfers bringing newly added accelerators online without interrupting service.
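As a rough illustration of the decoupling idea in (1), here is a minimal Python sketch: memory-side scaling work is queued onto a background thread while the serving loop keeps running. All names (ScalingCoordinator, p2p_copy_weights, remap_kv_cache) are hypothetical placeholders, not the paper's actual API.

```python
# Conceptual sketch only: scaling's memory operations run concurrently with
# serving. Functions below are illustrative stand-ins, not ElasticMoE's API.
import threading, queue, time

def p2p_copy_weights(src_dev, dst_dev):
    time.sleep(0.1)  # stand-in for a high-bandwidth P2P weight transfer
    print(f"weights copied {src_dev} -> {dst_dev}")

def remap_kv_cache(dst_dev):
    print(f"KV cache remapped on {dst_dev} (zero-copy)")  # stand-in for an HMM remap

class ScalingCoordinator:
    """Drains memory-side scaling steps on a background thread."""
    def __init__(self):
        self.ops = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:
            self.ops.get()()          # execute the queued memory operation
            self.ops.task_done()

    def scale_up(self, src_dev, new_devs):
        # Enqueue weight transfers and cache remaps; serving is never blocked.
        for d in new_devs:
            self.ops.put(lambda d=d: p2p_copy_weights(src_dev, d))
            self.ops.put(lambda d=d: remap_kv_cache(d))

coordinator = ScalingCoordinator()
coordinator.scale_up("npu:0", ["npu:4", "npu:5"])
for step in range(3):                 # serving loop continues during scale-up
    print(f"serving step {step} proceeds concurrently")
    time.sleep(0.05)
coordinator.ops.join()                # wait for the memory operations to finish
```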
Results: Evaluated on Ascend NPUs with three popular MoE LLMs, ElasticMoE achieves up to 9× lower scale-up latency, up to 2× higher throughput during scaling, and significantly better SLO attainment than baselines.
📝 Abstract
Mixture-of-Experts (MoE) models promise efficient scaling of large language models (LLMs) by activating only a small subset of experts per token, but their parallelized inference pipelines make elastic serving challenging. Existing strategies fall short: horizontal scaling provisions entire replicas of the current configuration, often tens to hundreds of accelerators, leading to coarse granularity, long provisioning delays, and costly overprovisioning. Vertical scaling offers finer adjustments but typically requires instance restarts, incurring downtime. These limitations make current approaches ill-suited for the bursty, short-lived traffic patterns common in cloud deployments.
We present ElasticMoE, an elastic scaling framework for MoE LLMs that achieves fine-grained, low-latency, and zero-downtime scaling. ElasticMoE decouples inference execution from memory operations, enabling scaling steps to proceed concurrently with serving. An HBM Management Module (HMM) reuses weights and KV caches via zero-copy remapping, while high-bandwidth peer-to-peer transfers bring newly added accelerators online without interrupting service. A virtual-memory-based expert redistribution mechanism migrates MoE experts without costly buffer reallocations, reducing peak memory usage during expert parallelism reconfiguration.
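To make the virtual-memory idea concrete, the following minimal Python sketch models expert buffers as fixed virtual slots whose physical pages can be rebound during migration, so neither device reallocates its buffer. This is purely illustrative under assumed semantics; VirtualExpertTable, bind, and unbind are invented names, not ElasticMoE's interface.

```python
# Illustrative sketch of virtual-memory-style expert redistribution:
# virtual slots are reserved once, and migration only rebinds physical pages.
class VirtualExpertTable:
    def __init__(self, num_slots):
        # Virtual slots reserved up front, sized for the maximum expert count.
        self.slots = [None] * num_slots   # slot index -> physical page id (or None)

    def bind(self, slot, phys_page):
        # Map an expert's physical pages into an already-reserved virtual slot.
        self.slots[slot] = phys_page

    def unbind(self, slot):
        # Drop the mapping; the virtual reservation itself stays intact.
        page, self.slots[slot] = self.slots[slot], None
        return page

# Reconfigure expert parallelism: half of the experts migrate to a newly added
# device by handing over their pages, with no buffer reallocation on either side.
old_dev = VirtualExpertTable(num_slots=8)
new_dev = VirtualExpertTable(num_slots=8)
for slot in range(8):
    old_dev.bind(slot, phys_page=f"page{slot}")
for slot in range(4, 8):                  # experts 4..7 move to the new device
    new_dev.bind(slot - 4, old_dev.unbind(slot))
print(old_dev.slots, new_dev.slots)
```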
Our evaluation on Ascend NPUs with three popular MoE LLMs shows that ElasticMoE achieves up to 9× lower scale-up latency, up to 2× better throughput during scaling, and significantly improves SLO attainment compared to baselines. By enabling fine-grained, concurrent scaling with minimal disruption, ElasticMoE advances the practicality of deploying massive MoE LLMs in dynamic cloud environments.