🤖 AI Summary
This work addresses the significant latency and throughput degradation in Mixture-of-Experts (MoE) inference caused by dynamic expert hotspot migration, which couples uneven computational load with network congestion. To mitigate this, the authors propose a continuous lookahead pipelined architecture that jointly optimizes computation and communication. The approach employs a gating-initialized predictor to forecast expert access patterns, integrates a hardware-aware solver for dynamic expert replication and token assignment, and introduces phased data transfer with an All-to-All communication decoupling mechanism. Under highly volatile workloads, the system reduces prefill latency by up to 1.32× and improves decoding throughput by up to 1.26×, substantially outperforming current state-of-the-art MoE inference systems.
📝 Abstract
Mixture-of-Experts (MoE) models have become a dominant architecture for scaling Large Language Models by activating only a sparse subset of experts per token. However, latency-critical MoE inference faces a fundamental tension: while expert parallelism improves memory efficiency, it also amplifies execution stragglers. In real-world serving, continuous batching and diverse concurrent requests induce rapid semantic shifts, causing expert hotspots to migrate abruptly across GPUs and triggering the 'double penalty' of coupled computational skew and network congestion. We propose PROBE, an inference system that co-balances computation and communication in real time. PROBE introduces Continuous Lookahead Pipelining, which proactively predicts, plans, and prefetches for upcoming layers while keeping all control overheads off the critical path. PROBE consists of: (1) a Gate-Initialized Lookahead Predictor that distills the target router to forecast next-layer expert activation with high fidelity; (2) a Hardware-Aware Balance Planning solver that jointly optimizes dynamic expert replication and token assignment under strict hiding-window constraints; and (3) a Phase-Locked Co-Scheduling policy that uses split-phase transmission to hide bandwidth-intensive expert transfers behind computation without contending with All-to-All collectives. Experiments show that PROBE reduces prefill latency by up to 1.32× and improves decoding throughput by up to 1.26× over state-of-the-art baselines, especially under extreme workload volatility.
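To make the replication-and-assignment idea concrete, here is a minimal, hypothetical sketch of how predicted expert loads could drive dynamic replication and balanced token routing. This is a toy greedy heuristic, not PROBE's actual Hardware-Aware Balance Planning solver; all function and variable names (`plan_assignment`, `replicate_hot_experts`, `home_gpu`, etc.) are illustrative assumptions, and real systems must also account for transfer cost, hiding-window constraints, and collective-communication scheduling.

```python
def plan_assignment(token_experts, expert_replicas, num_gpus):
    """Greedy token assignment: send each token to the least-loaded GPU
    that hosts a replica of its predicted expert (toy stand-in for a
    balance-planning solver; names here are illustrative)."""
    load = [0] * num_gpus
    assignment = []
    for expert in token_experts:
        gpu = min(expert_replicas[expert], key=lambda g: load[g])
        load[gpu] += 1
        assignment.append(gpu)
    return assignment, load

def replicate_hot_experts(predicted_counts, home_gpu, num_gpus, top_n=1):
    """Dynamic replication sketch: add one extra replica of each of the
    top-n hottest predicted experts on the least-loaded GPU, so the
    assignment step above can split their traffic."""
    replicas = {e: [home_gpu[e]] for e in predicted_counts}
    est_load = [0] * num_gpus
    for e, c in predicted_counts.items():
        est_load[home_gpu[e]] += c
    hot = sorted(predicted_counts, key=predicted_counts.get, reverse=True)[:top_n]
    for e in hot:
        target = min(range(num_gpus), key=lambda g: est_load[g])
        if target not in replicas[e]:
            replicas[e].append(target)
            est_load[target] += predicted_counts[e] // 2  # rough rebalance estimate
    return replicas
```

On a skewed synthetic workload (one hotspot expert receiving most tokens), replicating just the hottest expert and re-running the greedy assignment roughly halves the maximum per-GPU load relative to routing every token to its expert's home GPU.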