🤖 AI Summary
To address the low parallel efficiency and poor Tensor Core utilization of irregular sparse computations—such as Mixture-of-Experts (MoE)—on GPUs, this paper proposes an execution paradigm that combines static batching with dynamic task mapping. At compile time, it batches all tasks into a single dense task graph, turning dynamic sparse inference into a single-kernel execution; a lightweight runtime scheduler then maps tasks onto hardware resources at fine granularity. This approach delivers highly efficient Tensor Core computation for MoE inference, attaining 91% and 95% of peak Tensor Core throughput on NVIDIA H800 and H20 GPUs, respectively—significantly outperforming existing dynamic batching methods. The core contribution is a compiler–runtime co-optimization framework that establishes a new paradigm for high-throughput deployment of sparse models on hardware accelerators.
📝 Abstract
Arranging and executing irregular workloads on massively parallel devices has long been a challenge. We propose a general framework for statically batching irregular workloads into a single kernel with a runtime task-mapping mechanism on GPUs. We further apply this framework to Mixture-of-Experts (MoE) model inference and implement an optimized, efficient CUDA kernel. Our MoE kernel achieves up to 91% of the peak Tensor Core throughput on an NVIDIA H800 GPU and 95% on an NVIDIA H20 GPU.
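The core idea—flatten all irregular tasks into one dense, statically ordered task list at compile time, then let a lightweight runtime scheduler map tasks to hardware workers inside a single persistent kernel—can be sketched in simplified Python. All names and structure here are illustrative assumptions, not the paper's actual CUDA implementation; the thread-pool workers stand in for persistent thread blocks on SMs.

```python
import itertools
from concurrent.futures import ThreadPoolExecutor

# "Compile time": flatten every (expert, tile) pair into one dense,
# statically ordered task list -- the static task graph. Experts with
# more tokens simply contribute more tiles.
def build_task_graph(expert_token_counts, tile_size):
    tasks = []
    for expert, n_tokens in enumerate(expert_token_counts):
        n_tiles = (n_tokens + tile_size - 1) // tile_size  # ceiling division
        tasks.extend((expert, tile) for tile in range(n_tiles))
    return tasks

# "Runtime": each worker (a stand-in for one persistent thread block)
# grabs the next task index from a shared counter, so load balancing
# across uneven experts happens dynamically even though the task list
# itself was fixed statically.
def run_single_kernel(tasks, n_workers, compute):
    counter = itertools.count()  # next() is effectively atomic in CPython
    results = [None] * len(tasks)

    def worker():
        while True:
            i = next(counter)
            if i >= len(tasks):
                return
            results[i] = compute(*tasks[i])  # each index written by one worker

    with ThreadPoolExecutor(n_workers) as pool:
        for _ in range(n_workers):
            pool.submit(worker)
    return results
```

For example, with hypothetical per-expert token counts `[130, 5, 64]` and a tile size of 64, the graph holds five tasks: three tiles for expert 0 and one each for experts 1 and 2. A worker that finishes expert 1's small tile immediately moves on to whatever tile is next, which is the load-balancing benefit of decoupling the static task list from the dynamic mapping.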