MixServe: An Automatic Distributed Serving System for MoE Models with Hybrid Parallelism Based on Fused Communication Algorithm

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the cross-node communication bottlenecks and expert-parallel load imbalance that commonly hinder distributed deployment of Mixture-of-Experts (MoE) models. The authors propose an automated distributed inference system featuring a novel tensor-expert hybrid parallelism (TP-EP) mechanism that fuses All-Reduce and All-to-All communication patterns. Coupled with an automatic parallelism strategy search, the system selects the deployment configuration predicted to be fastest for a given model architecture and hardware topology. Experiments on DeepSeek-R1 and Qwen3 show that the approach accelerates time-to-first-token (TTFT) by 1.08–3.80× and inter-token latency (ITL) by 1.03–1.66×, and improves throughput by 5.2%–50.3% over existing baselines.
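
As a rough illustration of the fused AR-A2A idea described above, the sketch below overlaps an intra-node all-reduce (for the tensor-parallel part) with an inter-node all-to-all (for expert token dispatch) using asynchronous `torch.distributed` collectives on separate process groups. The group construction, tensor shapes, and the function name `fused_ar_a2a` are illustrative assumptions; the paper's actual fused communication kernel is not reproduced here.

```python
# Minimal sketch (assumption, not the authors' implementation): overlap an
# intra-node all-reduce with an inter-node all-to-all by launching both as
# async collectives on separate process groups and synchronizing once.

import torch
import torch.distributed as dist


def fused_ar_a2a(dense_act: torch.Tensor,
                 expert_in: torch.Tensor,
                 expert_out: torch.Tensor,
                 intra_group: dist.ProcessGroup,
                 inter_group: dist.ProcessGroup):
    """Launch both collectives without waiting, then wait on both."""
    # Inter-node token dispatch for expert parallelism (typically the slower path).
    a2a_work = dist.all_to_all_single(expert_out, expert_in,
                                      group=inter_group, async_op=True)
    # Intra-node reduction for the tensor-parallel attention/dense activations.
    ar_work = dist.all_reduce(dense_act, group=intra_group, async_op=True)
    ar_work.wait()
    a2a_work.wait()
    return dense_act, expert_out
```

In practice, the intra-node group would span the ranks sharing NVLink on one node (e.g. built with `dist.new_group`), while the inter-node group connects peer ranks across nodes over the slower interconnect, which is what makes hiding the A2A behind the AR attractive.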

📝 Abstract
Mixture of Experts (MoE) models are emerging as the latest paradigm for Large Language Models (LLMs). However, due to memory constraints, MoE models with billions or even trillions of parameters can only be deployed on multi-GPU, and often multi-node multi-GPU, serving systems. Communication has therefore become a major bottleneck in distributed serving, especially inter-node communication. Contemporary distributed MoE models are primarily implemented with all-reduce (AR) based tensor parallelism (TP) and all-to-all (A2A) based expert parallelism (EP). However, TP generally exhibits low inter-node efficiency and is thus confined to high-speed intra-node bandwidth, while EP tends to suffer from load imbalance, especially at high parallel degrees. In this work, we introduce MixServe, a novel automatic distributed serving system that efficiently deploys MoE models through a TP-EP hybrid parallelism built on a fused AR-A2A communication algorithm. MixServe first evaluates the communication overhead of candidate parallel strategies, taking into account the model hyperparameters and the network and hardware configurations, and automatically selects the most efficient strategy. We then propose the TP-EP hybrid parallelism based on the fused AR-A2A communication algorithm, which overlaps intra-node AR communication with inter-node A2A communication. Extensive experiments on DeepSeek-R1 and Qwen3 models demonstrate that MixServe achieves superior inference performance, with 1.08~3.80x acceleration in time to first token (TTFT), 1.03~1.66x acceleration in inter-token latency (ITL), and 5.2%~50.3% throughput improvement over existing approaches.
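
To make the strategy-search step concrete, below is a minimal, self-contained sketch of how a communication-cost model might compare candidate (TP degree, EP degree) splits of a GPU pool. The bandwidth values, cost formulas, and all class and function names (`ClusterSpec`, `MoESpec`, `pick_strategy`, ...) are illustrative assumptions, not the paper's actual cost model.

```python
# Hypothetical sketch of a communication-cost-based parallel strategy search,
# loosely following the abstract's description. All formulas and names are
# illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ClusterSpec:
    intra_bw: float      # intra-node bandwidth in GB/s (e.g. NVLink)
    inter_bw: float      # inter-node bandwidth in GB/s (e.g. InfiniBand)
    gpus_per_node: int
    num_nodes: int


@dataclass
class MoESpec:
    hidden: int              # hidden size
    tokens: int              # tokens per micro-batch
    top_k: int               # experts activated per token
    bytes_per_elem: int = 2  # bf16


def all_reduce_cost(spec: MoESpec, cluster: ClusterSpec, tp: int) -> float:
    """Ring all-reduce moves 2*(tp-1)/tp of the activation over intra-node links."""
    volume = spec.tokens * spec.hidden * spec.bytes_per_elem
    return 2 * (tp - 1) / tp * volume / (cluster.intra_bw * 1e9)


def all_to_all_cost(spec: MoESpec, cluster: ClusterSpec, ep: int) -> float:
    """Dispatch + combine of top_k routed tokens across ep ranks (inter-node bound)."""
    volume = spec.tokens * spec.top_k * spec.hidden * spec.bytes_per_elem
    return 2 * (ep - 1) / ep * volume / (cluster.inter_bw * 1e9)


def pick_strategy(spec: MoESpec, cluster: ClusterSpec):
    """Enumerate (tp, ep) splits of the GPU pool and keep the cheapest estimate."""
    world = cluster.gpus_per_node * cluster.num_nodes
    best = None
    for tp in (1, 2, 4, 8):
        if world % tp or tp > cluster.gpus_per_node:
            continue
        ep = world // tp
        cost = all_reduce_cost(spec, cluster, tp) + all_to_all_cost(spec, cluster, ep)
        if best is None or cost < best[0]:
            best = (cost, tp, ep)
    return best


if __name__ == "__main__":
    cluster = ClusterSpec(intra_bw=300, inter_bw=50, gpus_per_node=8, num_nodes=4)
    moe = MoESpec(hidden=7168, tokens=4096, top_k=8)
    print(pick_strategy(moe, cluster))  # (estimated seconds, tp degree, ep degree)
```

A real system would also account for load imbalance across experts, kernel compute time, and the overlap between the AR and A2A phases, but the same enumerate-and-estimate structure applies.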
Problem

Research questions and friction points this paper is trying to address.

Mixture of Experts
distributed serving
communication bottleneck
tensor parallelism
expert parallelism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Experts
Hybrid Parallelism
Fused Communication
Distributed Serving
Automatic Parallel Strategy