Janus: Disaggregating Attention and Experts for Scalable MoE Inference

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Mixture-of-Experts (MoE) models are difficult to serve: inference carries high resource demands and dynamic workloads, and conventional monolithic deployment scales poorly while wasting resources. To address these issues, this work proposes the first heterogeneous, disaggregated deployment architecture that schedules attention modules and expert modules separately onto distinct GPU sub-clusters, enabling on-demand elastic scaling and fine-grained resource management. It introduces a dynamic two-stage communication mechanism, a lightweight GPU kernel scheduler, a memory-aware fine-grained expert load-balancing algorithm, and a dynamic expert placement strategy. Experimental results demonstrate that, under strict token-level latency constraints, the approach achieves up to 3.9× higher per-GPU throughput than state-of-the-art systems, while substantially improving GPU resource utilization and system scalability.
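
Concretely, the disaggregated design turns each MoE layer into a dispatch/combine exchange between the two sub-clusters. Below is a minimal illustrative sketch in plain Python; the names (`dispatch`, `combine`, the `EXPERT_TO_GPU` placement table) and the scalar "hidden states" are assumptions for readability, not Janus's actual interfaces:

```python
from collections import defaultdict

# Hypothetical sketch of disaggregated MoE dispatch. Attention GPUs and
# expert GPUs sit in separate sub-clusters, so each MoE layer becomes an
# explicit dispatch/combine exchange between them.

NUM_EXPERTS = 8
EXPERT_TO_GPU = {e: e % 2 for e in range(NUM_EXPERTS)}  # assumed placement table, 2 expert GPUs

def dispatch(tokens):
    """Group (token_id, hidden, expert_ids) by the expert GPU hosting each expert."""
    per_gpu = defaultdict(list)
    for tok_id, hidden, experts in tokens:
        for e in experts:  # top-k routing: one copy of the token per selected expert
            per_gpu[EXPERT_TO_GPU[e]].append((tok_id, hidden, e))
    return per_gpu

def expert_gpu_step(batch):
    """Each expert GPU runs its experts; the real FFN is stubbed as identity here."""
    return [(tok_id, hidden, e) for tok_id, hidden, e in batch]

def combine(results):
    """Back on the attention side, merge the k expert outputs per token
    (simple average here; a real router weights by gating scores)."""
    acc = defaultdict(list)
    for tok_id, out, _ in results:
        acc[tok_id].append(out)
    return {tok_id: sum(outs) / len(outs) for tok_id, outs in acc.items()}

# Attention sub-cluster emits scalar "hidden states" with top-2 expert choices.
tokens = [(0, 0.5, [1, 3]), (1, 1.0, [3, 6])]
results = [r for batch in dispatch(tokens).values() for r in expert_gpu_step(batch)]
print(combine(results))  # {0: 0.5, 1: 1.0}
```

In a real deployment the hidden states are tensors, the expert step runs feed-forward networks, and the exchanges are collective GPU communications, but the control flow is the same: this is what lets the two sub-clusters scale independently.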

📝 Abstract
Large Mixture-of-Experts (MoE) model inference is challenging due to high resource demands and dynamic workloads. Existing solutions often deploy the entire model as a single monolithic unit, which applies a unified resource configuration to both attention and expert modules despite their different requirements, leading to limited scalability and resource inefficiency. In this paper, we propose Janus, a scalable MoE inference system that disaggregates attention and experts on separate GPU sub-clusters, enabling each module to be managed and scaled independently. Janus incorporates three key designs for efficient, disaggregated MoE inference. First, it proposes an adaptive two-phase communication scheme that exploits intra- and inter-node bandwidth hierarchies for low-latency data exchange. Second, motivated by the memory-bound nature of MoE modules, Janus introduces a lightweight scheduler and implements it as a GPU kernel to balance the number of activated experts across GPUs at minimal overhead, thereby reducing inference latency. Third, Janus performs fine-grained resource management to dynamically adjust expert placement and independently scale attention and MoE resources to improve overall efficiency. Evaluation shows Janus achieves up to 3.9× higher per-GPU throughput than state-of-the-art systems while meeting per-token latency requirements.
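
The balancing objective the scheduler pursues can be pictured with a host-side greedy sketch. The paper implements scheduling as a GPU kernel; the function below, `balance_experts`, is an assumed approximation using longest-processing-time-first placement, not the paper's algorithm:

```python
import heapq

def balance_experts(activations, num_gpus):
    """Greedy longest-processing-time assignment: place experts with the
    most activated tokens first, always onto the least-loaded GPU.
    `activations` maps expert_id -> number of tokens routed to it."""
    heap = [(0, gpu) for gpu in range(num_gpus)]   # (current load, gpu_id)
    heapq.heapify(heap)
    placement = {}
    for expert, load in sorted(activations.items(), key=lambda kv: -kv[1]):
        gpu_load, gpu = heapq.heappop(heap)        # least-loaded GPU so far
        placement[expert] = gpu
        heapq.heappush(heap, (gpu_load + load, gpu))
    return placement

# Example: skewed activations across 6 experts, balanced over 2 GPUs.
acts = {0: 90, 1: 10, 2: 40, 3: 35, 4: 5, 5: 20}
print(balance_experts(acts, num_gpus=2))
# {0: 0, 2: 1, 3: 1, 5: 1, 1: 0, 4: 1} -- per-GPU loads end at 100 vs 100
```

Because activation counts change every batch, a scheduler along these lines has to rerun constantly, which is why implementing it as a cheap GPU kernel matters.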
Problem

Research questions and friction points this paper is trying to address.

Monolithic deployment applies a unified resource configuration to attention and expert modules despite their different requirements
High resource demands and dynamic workloads lead to limited scalability and wasted GPU resources
Skewed expert activation imbalances load across GPUs and inflates per-token inference latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disaggregates attention and experts onto separate GPU sub-clusters
Uses adaptive two-phase communication that exploits intra- and inter-node bandwidth hierarchies (see the sketch after this list)
Implements a lightweight GPU-kernel scheduler that balances activated experts across GPUs
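
The two-phase idea of exploiting the bandwidth hierarchy, aggregating traffic inside a node before crossing the slower inter-node network, can be sketched as follows. This is assumed semantics for illustration, not Janus's actual protocol:

```python
from collections import defaultdict

def two_phase_exchange(messages, gpus_per_node):
    """`messages` is a list of (src_gpu, dst_gpu, payload) triples.
    Returns delivered payloads per GPU and the inter-node transfer count."""
    def node(gpu):
        return gpu // gpus_per_node

    # Phase 1 (intra-node, fast links): bucket payloads by (src_node, dst_node)
    # so everything bound for the same remote node travels together.
    buckets = defaultdict(list)
    for src, dst, payload in messages:
        buckets[(node(src), node(dst))].append((dst, payload))

    # Phase 2 (inter-node, slow network): one consolidated transfer per node
    # pair, then fan out to destination GPUs inside the receiving node.
    delivered = defaultdict(list)
    inter_node_sends = 0
    for (src_node, dst_node), items in buckets.items():
        if src_node != dst_node:
            inter_node_sends += 1  # single bulk transfer over the network
        for dst, payload in items:
            delivered[dst].append(payload)
    return delivered, inter_node_sends

msgs = [(0, 4, "a"), (1, 5, "b"), (2, 4, "c"), (5, 0, "d")]
out, sends = two_phase_exchange(msgs, gpus_per_node=4)
print(sends)  # 2 inter-node transfers instead of 4 naive point-to-point sends
```

The payoff is that the number of traversals of the slow inter-node links scales with the number of node pairs rather than with the number of routed tokens.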