🤖 AI Summary
This work addresses the vulnerability of large-scale Mixture-of-Experts (MoE) model inference to single-point failures, which often cause global service outages and loss of processing progress. To mitigate this, the authors propose Tarragon, a novel framework that, for the first time, decouples attention computation from expert computation into independent fault domains. Tarragon introduces a reconfigurable data path, asynchronous incremental KV cache checkpointing, shadow expert mechanisms, and a loosely synchronized execution model to enable fine-grained fault tolerance and per-request recovery. Experimental results demonstrate that Tarragon preserves baseline performance under fault-free conditions while reducing service downtime under node failures by 160–213× compared to MegaScale-Infer (from approximately 64 seconds down to 0.3–0.4 seconds).
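The asynchronous, incremental KV cache checkpointing the summary mentions can be sketched roughly as follows. This is a minimal illustration, not Tarragon's actual implementation: all class and method names (`IncrementalKVCheckpointer`, `checkpoint`, `restore`) are hypothetical, and a plain dictionary stands in for a durable checkpoint store. The key idea is that each attention worker persists only the KV entries appended since its last checkpoint, off the critical path, so a failed worker's requests can be restored individually instead of restarting the whole service.

```python
import queue
import threading

class IncrementalKVCheckpointer:
    """Sketch of async, incremental, per-request KV checkpointing (hypothetical API)."""

    def __init__(self):
        self.store = {}      # durable-store stand-in: request_id -> list of KV deltas
        self.offsets = {}    # request_id -> number of tokens already checkpointed
        self.pending = queue.Queue()
        # Background thread drains checkpoints off the inference critical path.
        threading.Thread(target=self._drain, daemon=True).start()

    def checkpoint(self, request_id, kv_cache):
        """Enqueue only the delta since the last checkpoint (non-blocking)."""
        done = self.offsets.get(request_id, 0)
        delta = kv_cache[done:]
        if delta:
            self.offsets[request_id] = len(kv_cache)
            self.pending.put((request_id, delta))

    def _drain(self):
        while True:
            request_id, delta = self.pending.get()
            self.store.setdefault(request_id, []).append(delta)
            self.pending.task_done()

    def restore(self, request_id):
        """Per-request restoration: stitch the checkpointed deltas back together."""
        return [kv for chunk in self.store.get(request_id, []) for kv in chunk]
```

Because each call ships only the new KV entries, checkpoint cost scales with tokens generated since the last snapshot rather than with total context length, which is what keeps the recovery and recomputation overhead low.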
📝 Abstract
Mixture-of-Experts (MoE) models are increasingly used to serve LLMs at scale, but failures become common as deployment scale grows. Existing systems exhibit poor failure resilience: even a single worker failure triggers a coarse-grained, service-wide restart, discarding accumulated progress and halting the entire inference pipeline during recovery, an approach clearly ill-suited for latency-sensitive LLM services. We present Tarragon, a resilient MoE inference framework that confines the impact of failures to individual workers while allowing the rest of the pipeline to continue making forward progress. Tarragon exploits the natural separation between attention and expert computation in MoE-based transformers, treating attention workers (AWs) and expert workers (EWs) as distinct failure domains. Tarragon introduces a reconfigurable datapath that masks failures by rerouting requests to healthy workers. On top of this datapath, Tarragon implements a self-healing mechanism that relaxes the tightly synchronized execution of existing MoE frameworks. For stateful AWs, Tarragon performs asynchronous, incremental KV cache checkpointing with per-request restoration; for stateless EWs, it leverages residual GPU memory to deploy shadow experts. Together, these keep recovery cost and recomputation overhead extremely low. Our evaluation shows that, compared to the state-of-the-art MegaScale-Infer, Tarragon reduces failure-induced stalls by 160–213× (from ~64 s down to 0.3–0.4 s) while preserving performance when no failures occur.
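The reconfigurable datapath described above can be illustrated with a short sketch. This is an assumption-laden toy, not Tarragon's real routing logic: the names `ExpertRouter`, `mark_failed`, and `route`, and the idea of a static shadow map, are all hypothetical. It only shows the shape of the mechanism: tokens destined for a failed expert worker are transparently redirected to a shadow replica hosted in residual GPU memory on a healthy node, so the pipeline never stalls on a service-wide restart.

```python
class ExpertRouter:
    """Toy reconfigurable datapath: reroute around failed expert workers (hypothetical API)."""

    def __init__(self, primaries, shadows):
        self.primaries = dict(primaries)  # expert_id -> primary expert worker
        self.shadows = dict(shadows)      # expert_id -> shadow replica on a healthy node
        self.failed = set()

    def mark_failed(self, worker):
        """Failure detector callback: take a worker out of the datapath."""
        self.failed.add(worker)

    def route(self, expert_id):
        """Return a healthy worker for this expert, preferring the primary."""
        primary = self.primaries[expert_id]
        if primary not in self.failed:
            return primary
        shadow = self.shadows.get(expert_id)
        if shadow is None or shadow in self.failed:
            raise RuntimeError(f"no healthy replica for expert {expert_id}")
        return shadow  # masked failure: requests keep flowing
```

Because EWs are stateless, redirecting to a shadow requires no state transfer; the stateful AW side is where the per-request KV checkpointing comes in.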