MoE-PHDS: One MoE checkpoint for flexible runtime sparsity

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fixed top-k sparsity in sparse Mixture-of-Experts (MoE) models forces practitioners to train and maintain a separate model for each efficiency target, substantially increasing deployment complexity and cost. This work proposes MoE-PHDS, a method that treats global sparsity as a tunable serving primitive: the top-k value can be adjusted at inference time without architectural modifications, so a single checkpoint supports multiple operating points. PHDS uses a lightweight supervised fine-tuning (SFT) recipe that mixes training across sparsity levels and adds a short high-sparsity curriculum, remaining fully compatible with existing MoE architectures. Evaluation on major MoE models shows that PHDS matches or exceeds dedicated per-sparsity models on the accuracy/latency trade-off, improves cross-sparsity consistency by up to 22%, and substantially increases deployment flexibility and energy-efficiency adaptability.

📝 Abstract
Sparse Mixtures of Experts (MoEs) are typically trained to operate at a fixed sparsity level, e.g. $k$ in a top-$k$ gating function. This global sparsity level determines an operating point on the accuracy/latency curve; currently, meeting multiple efficiency targets means training and maintaining multiple models. This practice complicates serving, increases training and maintenance costs, and limits flexibility in meeting diverse latency, efficiency, and energy requirements. We show that pretrained MoEs are more robust to runtime sparsity shifts than commonly assumed, and introduce MoE-PHDS ({\bf P}ost {\bf H}oc {\bf D}eclared {\bf S}parsity), a lightweight SFT method that turns a single checkpoint into a global sparsity control surface. PHDS mixes training across sparsity levels and anchors with a short curriculum at high sparsity, requiring no architectural changes. The result is predictable accuracy/latency tradeoffs from one model: practitioners can "dial $k$" at inference time without swapping checkpoints, changing architecture, or relying on token-level heuristics. Experiments on OLMoE-1B-7B-0125, Qwen1.5-MoE-A2.7B, and proprietary models fit on multiple operating points show that PHDS matches or exceeds well-specified oracle models, improves cross-sparsity agreement by up to 22% vs. well-specified oracle models, and enables simplified, flexible runtime MoE deployment by making global sparsity a first-class serving primitive.
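The core mechanism the abstract describes, top-$k$ gating where $k$ is a call-time argument rather than a value fixed at training time, can be sketched as follows. This is a minimal illustration of runtime-tunable top-k routing, not the paper's implementation; the function name and interface are hypothetical:

```python
import math

def topk_gate(router_logits, k):
    """Select the k highest-scoring experts and renormalize their
    routing weights with a softmax over just those experts.
    Because k is an ordinary argument, the same checkpoint can be
    served at different sparsity levels without retraining."""
    ranked = sorted(range(len(router_logits)),
                    key=lambda i: router_logits[i], reverse=True)
    chosen = ranked[:k]
    m = max(router_logits[i] for i in chosen)  # subtract max for stability
    exps = {i: math.exp(router_logits[i] - m) for i in chosen}
    z = sum(exps.values())
    return {i: exps[i] / z for i in chosen}  # expert index -> weight

# One checkpoint, two operating points: "dial k" at inference.
logits = [2.0, 0.5, 1.2, -0.3]
fast = topk_gate(logits, k=1)     # low-latency serving
quality = topk_gate(logits, k=2)  # higher-accuracy serving
```

The point of PHDS is that, after its SFT recipe, the model's accuracy degrades gracefully and predictably as `k` is lowered, so the serving layer can pick `k` per request.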
Problem

Research questions and friction points this paper is trying to address.

Training multiple sparse MoE models for different efficiency targets
Enabling flexible runtime sparsity control from a single checkpoint
Reducing training costs while maintaining accuracy-latency tradeoffs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single checkpoint enables flexible runtime sparsity control
Lightweight fine-tuning method without architectural modifications
Post-training adaptation for multiple accuracy-latency operating points
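The fine-tuning recipe summarized above, mixing sparsity levels during SFT and finishing with a short high-sparsity curriculum, might be scheduled like this. This is a hypothetical sketch: the set of k values, the uniform sampling, and the curriculum length are illustrative assumptions, not the paper's reported settings:

```python
import random

def sparsity_schedule(step, total_steps, k_values=(1, 2, 4, 8),
                      curriculum_frac=0.1, rng=random):
    """Pick the top-k value for one training step.
    Most of training samples k uniformly across the supported
    operating points (cross-sparsity mixed training); the final
    fraction anchors on the highest sparsity, i.e. the smallest k
    (short high-sparsity curriculum)."""
    if step >= total_steps * (1 - curriculum_frac):
        return min(k_values)   # high-sparsity anchoring phase
    return rng.choice(k_values)  # cross-sparsity mixed phase
```

Because only the gating sparsity seen during fine-tuning changes, no architectural modification is needed and any existing MoE checkpoint can, in principle, be adapted this way.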