Orders in Chaos: Enhancing Large-Scale MoE LLM Serving with Data Movement Forecasting

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
In large-scale Mixture-of-Experts (MoE) models, stochastic expert selection induces substantial cross-device data movement, becoming a critical bottleneck in multi-unit serving systems. Method: This work presents the first fine-grained empirical analysis on ultra-large MoE models (200B–671B parameters), generating 150 GB of trace data from over 24,000 real-world inference requests to characterize spatiotemporal patterns of data movement and distill six key insights. We propose a data-movement-centric modeling and simulation framework, and design a lightweight hardware-software co-optimization scheme requiring only minor architectural modifications to wafer-scale GPUs. Contribution/Results: Our approach achieves average speedups of 6.3× on DeepSeek-V3 and 4.0× on Qwen3, demonstrating both high efficiency and broad applicability across heterogeneous hardware platforms.

📝 Abstract
Large Language Models (LLMs) with Mixture of Experts (MoE) architectures achieve remarkable performance improvements, but their random expert selection mechanism introduces significant data movement overhead that becomes the dominant bottleneck in multi-unit serving systems. To forecast the patterns underlying this data movement, we conduct comprehensive data-movement-centric profiling across three state-of-the-art large-scale MoE models (200B–671B) using over 24,000 requests spanning diverse workloads. With the resulting 150 GB+ trace files, we perform systematic analysis from both temporal and spatial perspectives and distill six key insights to guide the design of diverse future serving systems. Taking wafer-scale GPUs as a case study, we demonstrate that minor architectural modifications leveraging our insights achieve substantial performance gains, delivering 6.3× and 4.0× average speedups on DeepSeek V3 and Qwen3, respectively. Our work provides the first comprehensive data-centric analysis of MoE models at scale. Our profiling traces and analysis results are publicly available at https://huggingface.co/datasets/core12345/MoE_expert_selection_trace. We will also release our simulation framework shortly to facilitate future research in this area.
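To make the data-movement-centric profiling concrete, here is a minimal sketch of the kind of analysis the abstract describes: given an expert-selection trace and an expert-to-device placement, count how often a token's routing crosses device boundaries and how skewed per-expert load is. The trace schema, top-k value, and placement below are illustrative assumptions, not the paper's released format.

```python
# Hedged sketch (not the paper's released framework): estimating cross-device
# data movement and spatial routing skew from a synthetic MoE selection trace.
from collections import Counter

def cross_device_tokens(trace, expert_to_device, home_device=0):
    """Count token-to-expert routings that leave the token's home device.

    trace: list of per-token top-k expert-id lists, e.g. [[3, 7], [0, 5], ...]
    expert_to_device: mapping from expert id to device id (assumed placement)
    """
    moved = total = 0
    for topk in trace:
        for e in topk:
            total += 1
            if expert_to_device[e] != home_device:
                moved += 1  # this activation must be shipped off-device
    return moved, total

def expert_load(trace):
    """Per-expert activation counts, exposing spatial load imbalance."""
    load = Counter()
    for topk in trace:
        load.update(topk)
    return load

# Toy trace: 4 tokens, top-2 routing over 8 experts split across 2 devices.
trace = [[0, 4], [1, 4], [4, 5], [0, 1]]
placement = {e: e // 4 for e in range(8)}  # experts 0-3 on dev 0, 4-7 on dev 1
moved, total = cross_device_tokens(trace, placement)  # → 4 of 8 routings move
```

Even this toy placement shows how a hot expert (expert 4 above) on a remote device dominates cross-device traffic, which is the kind of spatial pattern the released traces let one quantify at scale.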
Problem

Research questions and friction points this paper is trying to address.

Forecasting data movement patterns in MoE LLMs
Reducing data movement bottlenecks in multi-unit serving systems
Characterizing expert selection through comprehensive profiling analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Forecasting data movement patterns via profiling
Analyzing temporal and spatial expert selection insights
Modifying GPU architecture to accelerate MoE serving
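The temporal side of the analysis can be sketched the same way: if consecutive decode steps tend to reuse experts, that locality is what makes data-movement forecasting (and caching of expert weights) viable. The metric and trace shape below are illustrative assumptions, not the paper's actual analysis code.

```python
# Hedged sketch (illustrative only): temporal locality of expert selection
# for one MoE layer, measured as the fraction of consecutive decode steps
# that share at least one selected expert.
def reuse_rate(step_selections):
    """step_selections: list of sets of expert ids chosen at each step."""
    if len(step_selections) < 2:
        return 0.0
    hits = sum(
        1 for prev, cur in zip(step_selections, step_selections[1:])
        if prev & cur  # non-empty intersection: at least one expert reused
    )
    return hits / (len(step_selections) - 1)

steps = [{0, 4}, {4, 5}, {1, 2}, {2, 7}]
rate = reuse_rate(steps)  # 2 of the 3 transitions share an expert
```

A high reuse rate suggests a forecaster can prefetch or pin likely-next experts; a rate near the random-routing baseline would argue against such caching.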