AI Summary
To address severe challenges in compute capacity, memory bandwidth, and communication latency for large-model inference services, this paper introduces CloudMatrix384, an end-to-end AI infrastructure supernode, together with CloudMatrix-Infer, its dedicated inference system. Methodologically, it proposes a novel peer-to-peer serving architecture that scales the prefill, decode, and KV-cache paths independently and elastically; pioneers an EP320-level expert-parallelism strategy; and establishes a hardware-aware optimization framework built on an ultra-high-bandwidth Unified Bus (UB) interconnect network. Implemented on a heterogeneous cluster of Ascend 910C NPUs and Kunpeng CPUs, the system integrates full-mesh UB interconnection, INT8 quantization, micro-batch pipelined scheduling, and custom operators. Evaluation on DeepSeek-R1 shows 6,688 tokens/s per NPU for prefill and 1,943 tokens/s per NPU for decode (TPOT < 50 ms); under a stringent 15 ms latency constraint, it sustains 538 tokens/s per NPU with no accuracy degradation.
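The expert-parallelism strategy above hinges on a token-dispatch step: each token's router picks its top-k experts, and tokens are grouped by destination expert before an all-to-all exchange over the interconnect. The following is a minimal NumPy sketch of that grouping step only (the routing function, shapes, and bucket structure here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dispatch_tokens(gate_logits, top_k=2):
    """Group token indices by destination expert, mimicking the
    per-expert bucketing that precedes an all-to-all dispatch.
    gate_logits: (n_tokens, n_experts) router scores (hypothetical)."""
    n_tokens, n_experts = gate_logits.shape
    # top-k expert indices per token; order within the k does not matter
    topk = np.argpartition(-gate_logits, top_k - 1, axis=1)[:, :top_k]
    buckets = {e: [] for e in range(n_experts)}
    for tok_id, experts in enumerate(topk):
        for e in experts:
            buckets[int(e)].append(tok_id)  # token goes to expert e's rank
    return buckets

rng = np.random.default_rng(0)
logits = rng.standard_normal((8, 4))   # 8 tokens, 4 experts (toy sizes)
buckets = dispatch_tokens(logits, top_k=2)
# every token is replicated to exactly top_k expert buckets
assert sum(len(v) for v in buckets.values()) == 8 * 2
```

At EP320 scale, the bucket-to-rank exchange is where interconnect bandwidth dominates, which is why the UB full-mesh topology matters for this step.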
Abstract
The rapid evolution of large language models (LLMs), driven by growing parameter scales, adoption of mixture-of-experts (MoE) architectures, and expanding context lengths, imposes unprecedented demands on AI infrastructure. Traditional AI clusters face limitations in compute intensity, memory bandwidth, inter-chip communication, and latency, compounded by variable workloads and strict service-level objectives. Addressing these issues requires fundamentally redesigned hardware-software integration. This paper introduces Huawei CloudMatrix, a next-generation AI datacenter architecture, realized in the production-grade CloudMatrix384 supernode. It integrates 384 Ascend 910C NPUs and 192 Kunpeng CPUs interconnected via an ultra-high-bandwidth Unified Bus (UB) network, enabling direct all-to-all communication and dynamic pooling of resources. These features optimize performance for communication-intensive operations, such as large-scale MoE expert parallelism and distributed key-value cache access. To fully leverage CloudMatrix384, we propose CloudMatrix-Infer, an advanced LLM serving solution incorporating three core innovations: a peer-to-peer serving architecture that independently scales prefill, decode, and caching; a large-scale expert parallelism strategy supporting EP320 via efficient UB-based token dispatch; and hardware-aware optimizations including specialized operators, microbatch-based pipelining, and INT8 quantization. Evaluation with the DeepSeek-R1 model shows CloudMatrix-Infer achieves state-of-the-art efficiency: prefill throughput of 6,688 tokens/s per NPU and decode throughput of 1,943 tokens/s per NPU (<50 ms TPOT). It effectively balances throughput and latency, sustaining 538 tokens/s even under stringent 15 ms latency constraints, while INT8 quantization maintains model accuracy across benchmarks.
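One of the hardware-aware optimizations named above is INT8 quantization. As a point of reference for what such a scheme involves, here is a generic symmetric per-tensor INT8 quantize/dequantize sketch in NumPy; the paper's actual quantization recipe is not specified here, so the scaling rule below is an assumption for illustration:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization (illustrative):
    scale so max |value| maps to 127, then round and clip to int8."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from INT8 values."""
    return q.astype(np.float32) * scale

x = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
# round-trip error is bounded by half a quantization step
assert np.max(np.abs(x - x_hat)) <= s / 2 + 1e-6
```

Storing weights and activations as int8 halves memory traffic relative to FP16, which is how quantization raises effective throughput while, per the evaluation, keeping benchmark accuracy intact.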