ElasticMoE: An Efficient Auto Scaling Method for Mixture-of-Experts Models

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Elastic serving of Mixture-of-Experts (MoE) large language models in cloud environments remains challenging: horizontal scaling is coarse-grained and incurs high latency and cost, while vertical scaling requires service-interrupting restarts. Method: This paper proposes a fine-grained, low-latency, zero-downtime dynamic scaling framework. Its core innovations include: (1) decoupling inference execution from memory operations to enable concurrent scaling and serving; (2) introducing a virtual memory mechanism to support expert migration without cache reallocation; and (3) designing an HBM management module that enables zero-copy remapping of model weights and KV caches, accelerated by P2P high-bandwidth interconnects for rapid accelerator onboarding. Results: Evaluated on Ascend NPUs, the framework reduces scale-up latency by up to 9×, doubles throughput during scaling events, and significantly improves SLO attainment.
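The "decoupling inference execution from memory operations" idea can be pictured with a toy sketch (this is illustrative only, not the paper's Ascend/NPU implementation; all names such as `ElasticServer` and `scale_up` are invented for the example). Serving keeps handling requests while a background thread does the expensive memory work of a scale-up, and requests pause only for a brief pointer swap:

```python
import threading
import time

class ElasticServer:
    """Toy model of zero-downtime scaling: expensive memory work runs
    concurrently with serving; only the final switch-over takes a lock."""

    def __init__(self, weights):
        self.weights = weights           # live weights used by inference
        self.swap_lock = threading.Lock()

    def infer(self, token):
        with self.swap_lock:             # held only for the instant of a read/swap
            w = self.weights
        return token * w                 # stand-in for a forward pass

    def scale_up(self, new_weights):
        # Expensive memory work (allocation, P2P weight transfer) happens
        # here, concurrently with serving -- no lock held during it.
        time.sleep(0.05)                 # simulate the weight transfer
        with self.swap_lock:             # brief, atomic switch-over
            self.weights = new_weights

server = ElasticServer(weights=2)
scaler = threading.Thread(target=server.scale_up, args=(3,))
scaler.start()
# Requests keep flowing during the scale-up.
results = [server.infer(1) for _ in range(1000)]
scaler.join()
```

Every request observes either the old or the new weights, never downtime: the serving path is never blocked on the transfer itself.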

📝 Abstract
Mixture-of-Experts (MoE) models promise efficient scaling of large language models (LLMs) by activating only a small subset of experts per token, but their parallelized inference pipelines make elastic serving challenging. Existing strategies fall short: horizontal scaling provisions entire replicas of the current configuration, often tens to hundreds of accelerators, leading to coarse granularity, long provisioning delays, and costly overprovisioning. Vertical scaling offers finer adjustments but typically requires instance restarts, incurring downtime. These limitations make current approaches ill-suited for the bursty, short-lived traffic patterns common in cloud deployments. We present ElasticMoE, an elastic scaling framework for MoE LLMs that achieves fine-grained, low-latency, and zero-downtime scaling. ElasticMoE decouples inference execution from memory operations, enabling scaling steps to proceed concurrently with serving. An HBM Management Module (HMM) reuses weights and KV caches via zero-copy remapping, while high-bandwidth peer-to-peer transfers bring newly added accelerators online without interrupting service. A virtual memory based expert redistribution mechanism migrates MoE experts without costly buffer reallocations, reducing peak memory usage during expert parallelism reconfiguration. Our evaluation on Ascend NPUs with three popular MoE LLMs shows that ElasticMoE achieves up to 9x lower scale-up latency, up to 2x better throughput during scaling, and significantly improves SLO attainment compared to baselines. By enabling fine-grained, concurrent scaling with minimal disruption, ElasticMoE advances the practicality of deploying massive MoE LLMs in dynamic cloud environments.
Problem

Research questions and friction points this paper is trying to address.

Achieving elastic scaling for Mixture-of-Experts model inference
Reducing coarse granularity and delays in current scaling methods
Enabling zero-downtime scaling for bursty cloud traffic patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples inference execution from memory operations
Reuses weights via zero-copy remapping mechanism
Migrates experts using virtual memory redistribution
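The virtual-memory expert-redistribution bullet can be sketched in miniature (a hand-rolled analogy, not the paper's HMM or any real driver API): expert buffers stay put in "physical" memory, and migrating an expert between ranks rewrites only a page-table entry, so no buffer is reallocated or copied.

```python
# "Physical" pages: expert weight buffers that never move.
physical = {
    "page0": [1.0, 2.0],
    "page1": [3.0, 4.0],
}

# Virtual view: (rank, expert) -> physical page.
page_table = {
    ("rank0", "expert0"): "page0",
    ("rank0", "expert1"): "page1",
}

def lookup(rank, expert):
    """Resolve an expert's weights through the mapping table."""
    return physical[page_table[(rank, expert)]]

before = lookup("rank0", "expert1")

# Expert-parallelism reconfiguration: expert1 "migrates" to rank1 via a
# page-table update alone -- zero-copy, the underlying buffer is untouched.
page_table[("rank1", "expert1")] = page_table.pop(("rank0", "expert1"))

after = lookup("rank1", "expert1")
assert after is before   # same buffer object: nothing was copied
```

This mirrors why the mechanism reduces peak memory during reconfiguration: there is never a moment when both a source and a destination copy of the expert exist.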