xDeepServe: Model-as-a-Service on Huawei CloudMatrix384

📅 2025-08-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of deploying large-scale Mixture-of-Experts (MoE) models on Huawei's CloudMatrix384 SuperPod (rigid execution models, non-scalable scheduling, imbalanced expert load, and single points of failure), this paper proposes an LLM serving system tailored for ultra-large-scale AI infrastructure. Methodologically, it introduces: (1) Transformerless, a decoupled architecture that decomposes the Transformer into independently schedulable units; (2) XCCL, a communication library that leverages global shared memory to optimize cross-NPU data exchange; and (3) dual disaggregation of prefill-decode and MoE-attention, tightly integrated with the FlowServe inference engine and the SuperPod's high-speed interconnect. The system scales elastically across hundreds of NPUs, improves expert load balancing by 42%, eliminates centralized scheduler bottlenecks, and significantly raises throughput and resource utilization while maintaining low latency.
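The dual disaggregation described above can be pictured as independently scheduled stages that exchange activations over the fabric. Below is a minimal, illustrative Python sketch: stdlib threads and queues stand in for NPUs and the high-speed interconnect, and all names (attention_unit, moe_unit, the toy experts) are hypothetical, not the paper's actual API.

```python
# Hedged sketch of the "Transformerless" idea: attention and MoE run as
# independently schedulable units exchanging activations over a fabric,
# modeled here with in-process queues. Illustrative only.
import queue
import threading

def attention_unit(in_q, out_q):
    """Consumes token batches, produces stand-in attention outputs."""
    while True:
        batch = in_q.get()
        if batch is None:            # shutdown signal
            out_q.put(None)
            return
        # Placeholder for attention compute on an NPU.
        out_q.put({"tokens": batch["tokens"],
                   "hidden": [h * 2 for h in batch["hidden"]]})

def moe_unit(in_q, results):
    """Routes each activation to one of several toy 'experts'."""
    experts = [lambda h, k=k: h + k for k in range(4)]  # toy experts
    while True:
        act = in_q.get()
        if act is None:
            return
        routed = [experts[i % len(experts)](h)
                  for i, h in enumerate(act["hidden"])]
        results.append({"tokens": act["tokens"], "hidden": routed})

attn_in, attn_out, results = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=attention_unit, args=(attn_in, attn_out)),
    threading.Thread(target=moe_unit, args=(attn_out, results)),
]
for t in threads:
    t.start()
attn_in.put({"tokens": [0, 1, 2], "hidden": [1.0, 2.0, 3.0]})
attn_in.put(None)
for t in threads:
    t.join()
print(results)
```

The point of the toy is the topology, not the math: because the two stages share nothing but a queue, each can be replicated and placed on different NPUs independently, which is what makes compute and memory separately scalable.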

📝 Abstract
The rise of scaled-out LLMs and scaled-up SuperPods signals a new era in large-scale AI infrastructure. LLMs continue to scale out via MoE, as seen in recent models like DeepSeek, Kimi, and Qwen. In parallel, AI hardware is scaling up, with Huawei's CloudMatrix384 SuperPod offering hundreds of GB/s high-speed interconnects. Running large MoE models on SuperPod-scale hardware brings new challenges. It requires new execution models, scalable scheduling, efficient expert load balancing, and elimination of single points of failure. This paper presents xDeepServe, Huawei Cloud's LLM serving system designed for SuperPod-scale infrastructure. At its core is Transformerless, a disaggregated architecture that decomposes transformer models into modular units (attention, feedforward, and MoE) executed independently on NPUs connected via high-speed fabric. We implement this design in two forms: disaggregated prefill-decode and disaggregated MoE-attention. This fully disaggregated setup enables independent scaling of compute and memory without sacrificing performance. To support this architecture, we propose XCCL, a communication library that leverages CloudMatrix384's global shared memory to implement efficient point-to-point and all-to-all primitives. We also extend our serving engine FlowServe with system-level techniques, enabling scalable inference across hundreds of NPUs.
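The abstract's all-to-all primitives over global shared memory can be illustrated with a toy model in which ranks write shards directly into peers' slots of a shared buffer instead of exchanging messages. This is a hedged sketch of the communication pattern only; XCCL's real API and synchronization are not given in this summary, and everything below (WORLD, the shared list, the barrier note) is an assumption.

```python
# Sketch of an all-to-all over a globally shared address space, the
# pattern XCCL reportedly builds on CloudMatrix384's shared memory.
WORLD = 4                      # number of simulated NPU ranks

# One structure standing in for global shared memory, indexed [src][dst].
shared = [[None] * WORLD for _ in range(WORLD)]

# Phase 1: every rank writes the shard destined for each peer directly
# into the peer's slot -- no intermediate copies or host round-trips.
for src in range(WORLD):
    local_data = [f"rank{src}->rank{dst}" for dst in range(WORLD)]
    for dst in range(WORLD):
        shared[src][dst] = local_data[dst]

# (A real implementation would need a barrier here before any reads.)

# Phase 2: every rank reads its column, completing the all-to-all.
for dst in range(WORLD):
    received = [shared[src][dst] for src in range(WORLD)]
    print(f"rank{dst} received: {received}")
```

The design point this models is that a global shared address space turns an all-to-all from O(N²) message exchanges into direct remote writes plus one synchronization step.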
Problem

Research questions and friction points this paper is trying to address.

Running large MoE models efficiently on SuperPod-scale hardware
Developing scalable execution models for disaggregated transformer architectures
Enabling high-performance communication across hundreds of NPUs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformerless disaggregated architecture for modular execution
XCCL communication library for efficient communication over CloudMatrix384's global shared memory
FlowServe engine for scalable inference across hundreds of NPUs (a decentralized-scheduling sketch follows this list)
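As referenced in the last item, one way to read the summary's "eliminates centralized scheduler bottlenecks" claim is to shard requests across per-group schedulers so that no global queue sits on the hot path. The sketch below is an assumption-laden illustration: GroupScheduler, pick_group, and the hash-based placement policy are hypothetical, not FlowServe's actual design.

```python
# Hedged sketch of decentralized scheduling: requests are sharded across
# per-group schedulers instead of funneled through one global scheduler.
import hashlib

class GroupScheduler:
    """Schedules requests onto the NPUs of a single group (hypothetical)."""
    def __init__(self, group_id, num_npus):
        self.group_id = group_id
        self.num_npus = num_npus
        self.next_npu = 0

    def dispatch(self, request_id):
        npu = self.next_npu
        self.next_npu = (self.next_npu + 1) % self.num_npus  # local round-robin
        return f"group{self.group_id}/npu{npu} <- {request_id}"

def pick_group(request_id, num_groups):
    """Stateless request-to-group mapping; no central coordinator needed."""
    digest = hashlib.sha256(request_id.encode()).digest()
    return digest[0] % num_groups

schedulers = [GroupScheduler(g, num_npus=8) for g in range(4)]
for rid in ("req-1", "req-2", "req-3", "req-4"):
    g = pick_group(rid, len(schedulers))
    print(schedulers[g].dispatch(rid))
```

Because group selection is a pure function of the request id, any frontend can route without consulting shared state, which is the property that removes the single scheduler from the critical path.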
👥 Authors
Ao Xiao
Bangzheng He
Baoquan Zhang
Baoxing Huai (HuaweiCloud; NLP, knowledge computing)
Bingji Wang
Bo Wang
Bo Xu
Boyi Hou
Chan Yang
Changhong Liu
Cheng Cui (BUAA; deep learning, network design, OCR, MLLM)
Chenyu Zhu
Cong Feng
Daohui Wang
Dayun Lin
Duo Zhao
Fengshao Zou
Fu Wang
Gangqiang Zhang
Gengyuan Dan
Guanjie Chen
Guodong Guan
Guodong Yang
Haifeng Li (Central South University; GIS, remote sensing, machine learning, sparse representation, brain theory)
Haipei Zhu