MoEless: Efficient MoE LLM Serving via Serverless Computing

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high latency and cost of Mixture-of-Experts (MoE) large language model inference caused by imbalanced expert utilization, a challenge exacerbated by existing approaches that rely on static, inelastic resource allocation. To overcome this limitation, the authors propose the first serverless MoE inference framework. Their approach employs a lightweight, layer-aware predictor to forecast expert load and combines dynamic scaling of expert functions with locality-aware placement to enable efficient scheduling and high GPU utilization. Experiments on an 8-GPU cluster using Megatron-LM show that the framework reduces inference latency by 43% and inference cost by 84% compared to the state-of-the-art method.
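The paper does not publish its predictor's internals, but the idea of a lightweight, per-layer expert-load forecaster can be sketched as an exponential moving average over recent per-expert routing counts, with stragglers flagged when their forecast load exceeds a multiple of the mean. The class name, the EMA formulation, and the straggler threshold below are all illustrative assumptions, not MoEless's actual design:

```python
class LayerLoadPredictor:
    """Illustrative per-layer expert-load predictor: EMA over the
    per-expert token counts observed in recent routing decisions.
    (Hypothetical sketch; not the paper's actual predictor.)"""

    def __init__(self, num_experts, alpha=0.5):
        self.num_experts = num_experts
        self.alpha = alpha              # EMA smoothing factor (assumed)
        self.ema = [0.0] * num_experts  # forecast load per expert

    def update(self, token_counts):
        """Fold the latest per-expert token counts into the forecast."""
        for e in range(self.num_experts):
            self.ema[e] = (self.alpha * token_counts[e]
                           + (1 - self.alpha) * self.ema[e])

    def predict_stragglers(self, factor=1.5):
        """Flag experts whose forecast load exceeds `factor` x the mean."""
        mean = sum(self.ema) / self.num_experts
        return [e for e, load in enumerate(self.ema) if load > factor * mean]
```

A serving layer could run one such predictor per MoE layer and use the flagged experts to trigger scale-out before the imbalanced batch actually arrives.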

📝 Abstract
Large Language Models (LLMs) have become a cornerstone of AI, driving progress across diverse domains such as content creation, search and recommendation systems, and AI-assisted workflows. To contain extreme training costs while advancing model scale, Mixture-of-Experts (MoE) has become a popular backbone for modern LLMs, which are commonly served in distributed deployments using expert parallelism (EP). However, MoE's sparse activation mechanism leads to severe expert load imbalance, where a few experts become overloaded while others remain idle, producing expert stragglers that inflate inference latency and serving cost. Existing expert load balancing solutions assume static resource configurations on serverful infrastructures, limiting expert scalability and elasticity, and resulting in either costly real-time expert swapping or degraded generation quality. We present MoEless, the first serverless MoE serving framework that mitigates expert load imbalance and accelerates inference via serverless experts. MoEless employs lightweight, layer-aware predictors to accurately estimate incoming expert load distributions and proactively identify stragglers. We design optimized expert scaling and placement strategies to maximize function locality, improve GPU utilization, and balance loads across experts and GPUs. MoEless is prototyped on top of Megatron-LM and deployed on an eight-GPU testbed. Experiments with open-source MoE models and real-world workloads show that MoEless reduces inference latency by 43% and inference cost by 84% compared to state-of-the-art solutions.
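The abstract's "maximize function locality" idea can be illustrated with a greedy placement sketch: when scaling out replicas of a hot expert, prefer GPUs that already hold that expert's weights (no weight transfer needed), and fall back to the GPU with the most spare capacity. The function name, the slot-based capacity model, and the greedy policy are assumptions for illustration only, not the paper's actual placement algorithm:

```python
def place_expert_replicas(replicas_needed, gpu_free_slots, resident):
    """Greedy locality-aware placement sketch (hypothetical, not MoEless's
    actual strategy).

    replicas_needed: {expert_id: number of replicas to place}
    gpu_free_slots:  {gpu_id: free expert slots on that GPU}
    resident:        {expert_id: [gpu_ids already holding its weights]}
    Returns {expert_id: [gpu_ids chosen for its replicas]}.
    """
    placement = {}
    free = dict(gpu_free_slots)  # work on a copy; caller's dict is untouched
    for expert, count in replicas_needed.items():
        chosen = []
        # 1) Prefer GPUs where the expert's weights are already resident.
        for g in resident.get(expert, []):
            if len(chosen) == count:
                break
            if free.get(g, 0) > 0:
                chosen.append(g)
                free[g] -= 1
        # 2) Fall back to the GPUs with the most remaining capacity.
        while len(chosen) < count:
            g = max(free, key=free.get)
            if free[g] == 0:
                break  # cluster is full; place as many as possible
            chosen.append(g)
            free[g] -= 1
        placement[expert] = chosen
    return placement
```

Favoring resident GPUs avoids copying expert weights over the interconnect at scale-out time, which is the intuition behind locality-aware serverless function scheduling in general.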
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Experts
load imbalance
LLM serving
inference latency
serverless computing
Innovation

Methods, ideas, or system contributions that make the work stand out.

serverless computing
Mixture-of-Experts (MoE)
expert load balancing
dynamic expert scaling
LLM inference optimization