🤖 AI Summary
This work addresses low-batch Mixture-of-Experts (MoE) inference on edge devices, which suffers from limited on-chip memory, load imbalance, and frequent off-chip memory accesses. To overcome these limitations, the authors propose Fully Sharded Expert Data Parallelism (FSE-DP), a parallelization paradigm tailored for multi-chiplet accelerators. The approach integrates a dynamic expert trajectory scheduling mechanism with a small set of hardware-friendly virtualization rules, enabling fine-grained dynamic scheduling of expert streams over high-bandwidth die-to-die (D2D) chiplet interconnects. The design overlaps computation and communication while balancing workload across chiplets. Evaluated on multi-chiplet architectures, the proposed method achieves a 1.22–2.00× speedup over state-of-the-art baselines and reduces on-chip memory usage by up to 78.8%.
📝 Abstract
Mixture-of-Experts (MoE) is a promising approach for edge AI with low-batch inference. Yet on-device deployments often face limited on-chip memory and severe workload imbalance, and the prevalent reliance on offloading further incurs off-chip memory access bottlenecks. Moreover, MoE sparsity and dynamic gating push distributed strategies toward much finer granularity and introduce runtime scheduling concerns. Recently, high-bandwidth die-to-die (D2D) chiplet interconnects have created new opportunities for multi-chiplet systems to address workload imbalance and offloading bottlenecks through fine-grained scheduling. In this paper, we propose Fully Sharded Expert Data Parallelism (FSE-DP), a parallelization paradigm architected specifically for low-batch MoE inference on multi-chiplet accelerators. FSE-DP attains adaptive computation-communication overlap and balanced load by orchestrating fine-grained, complementary expert streams along dynamic trajectories across high-bandwidth D2D links. The attendant dataflow complexity is tamed by a minimal, hardware-amenable set of virtualization rules and a lightweight scheduling algorithm. Our approach achieves a 1.22–2.00× speedup over state-of-the-art baselines and saves up to 78.8% of on-chip memory.
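The balanced-load objective behind expert scheduling can be illustrated with a toy greedy placement. This is a minimal sketch in plain Python, not the paper's actual algorithm: the function name `schedule_experts`, the longest-processing-time heuristic, and the cost model (per-expert routed token counts as load) are all illustrative assumptions.

```python
import heapq

def schedule_experts(expert_loads, num_chiplets):
    """Greedily assign each routed expert's token load to the currently
    least-loaded chiplet (longest-processing-time-first heuristic).
    Returns (assignment, loads): a mapping {chiplet: [expert, ...]}
    and the resulting per-chiplet total loads."""
    # Place heavy experts first so light ones can fill the gaps.
    order = sorted(range(len(expert_loads)),
                   key=lambda e: expert_loads[e], reverse=True)
    # Min-heap of (current_load, chiplet_id) to find the least-loaded chiplet.
    heap = [(0, c) for c in range(num_chiplets)]
    heapq.heapify(heap)
    assignment = {c: [] for c in range(num_chiplets)}
    for e in order:
        load, c = heapq.heappop(heap)
        assignment[c].append(e)
        heapq.heappush(heap, (load + expert_loads[e], c))
    loads = {c: sum(expert_loads[e] for e in members)
             for c, members in assignment.items()}
    return assignment, loads

# Hypothetical gating outcome: six experts with skewed token counts,
# scheduled across two chiplets.
assignment, loads = schedule_experts([40, 30, 20, 10, 5, 5], 2)
```

With this skewed example the heuristic yields an even 55/55 split across the two chiplets; the paper's scheduler additionally accounts for dynamic trajectories and D2D communication overlap, which this toy model omits.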