SpecMD: A Comprehensive Study On Speculative Expert Prefetching

📅 2026-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the absence of a standardized evaluation framework for expert caching in Mixture-of-Experts (MoE) models and the inefficacy of conventional caching policies, such as LRU and LFU, at capturing the predictability of expert access patterns. To this end, the authors propose SpecMD, the first benchmarking framework tailored for MoE caching, along with a novel replacement policy named Least-Stale that explicitly models expert freshness, moving beyond traditional locality assumptions. Experimental results demonstrate that with only 5% (0.6 GB) of VRAM allocated to caching, the proposed approach achieves over an 88% cache hit rate, reduces time-to-first-token (TTFT) latency by 34.7%, and cuts conflict misses by a factor of up to 85 relative to LRU.

📝 Abstract
Mixture-of-Experts (MoE) models enable sparse expert activation, meaning that only a subset of the model's parameters is used during each inference. However, to translate this sparsity into practical performance, an expert caching mechanism is required. Previous works have proposed hardware-centric caching policies, but how these various caching policies interact with each other and with different hardware specifications remains poorly understood. To address this gap, we develop **SpecMD**, a standardized framework for benchmarking ad-hoc cache policies on various hardware configurations. Using SpecMD, we perform an exhaustive benchmarking of several MoE caching strategies, reproducing and extending prior approaches in controlled settings with realistic constraints. Our experiments reveal that MoE expert access is not consistent with temporal locality assumptions (e.g., LRU, LFU). Motivated by this observation, we propose **Least-Stale**, a novel eviction policy that exploits MoE's predictable expert access patterns to reduce collision misses by up to 85× over LRU. With such gains, we achieve over 88% hit rates with up to 34.7% Time-to-first-token (TTFT) reduction on OLMoE at only 5% (0.6 GB) of VRAM cache capacity.
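To make the core idea concrete, below is a minimal, hypothetical sketch of a staleness-based expert cache. The paper does not spell out the Least-Stale algorithm here, so this is only an illustration of the general principle the abstract describes: retain experts by freshness (including speculatively predicted future use) rather than by pure recency as LRU does. The class name, the `hint` method, and the freshness scoring are all assumptions for illustration, not the authors' implementation.

```python
class LeastStaleCache:
    """Toy staleness-based expert cache (illustrative sketch only).

    Each cached expert carries a freshness score: the step at which it
    was last used, or a later step at which a predictor expects it to
    be used again. On a miss at full capacity, the expert with the
    smallest (stalest) score is evicted.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.freshness = {}  # expert_id -> freshness score (step)

    def hint(self, expert_id, predicted_step):
        """Speculative freshness update: a predictor can mark a cached
        expert as needed at a future step, protecting it from eviction.
        Pure LRU has no equivalent, since it only sees past accesses."""
        if expert_id in self.freshness:
            self.freshness[expert_id] = max(self.freshness[expert_id],
                                            predicted_step)

    def access(self, expert_id, step):
        """Record an actual expert access; return True on a cache hit."""
        hit = expert_id in self.freshness
        if not hit and len(self.freshness) >= self.capacity:
            # Evict the stalest expert (lowest freshness score).
            stalest = min(self.freshness, key=self.freshness.get)
            del self.freshness[stalest]
        self.freshness[expert_id] = step
        return hit
```

For example, with capacity 2, hinting expert 0 as needed at a future step keeps it resident while expert 1, though more recently accessed, is evicted first.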
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Experts
expert caching
cache policies
hardware configuration
temporal locality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative Expert Prefetching
Mixture-of-Experts (MoE)
Expert Caching
Least-Stale
Cache Eviction Policy