🤖 AI Summary
This work addresses the vulnerability of large-scale Mixture-of-Experts (MoE) language models to hardware failures during inference, where conventional recovery via service restart incurs substantial latency from model reloading and computation-graph recompilation. The paper proposes ReviveMoE, the first approach to enable rapid fault recovery without restarting the serving instance, compatible with both collocated and disaggregated MoE architectures. Built on Huawei's xDeepServe serving platform and XCCL communication library, ReviveMoE introduces a lightweight fault-detection and state-reconstruction mechanism that circumvents model reloading and graph recompilation. Integrated into Huawei Cloud's Model-as-a-Service (MaaS) platform, it significantly reduces recovery latency and improves the availability and stability of large-scale MoE online inference services.
📝 Abstract
As LLM deployments scale across more hardware, the probability of a failure somewhere in the system increases significantly, and cloud operators must consider robust countermeasures to handle these inevitable failures. A common recovery approach is to simply restart the LLM serving instance; however, this is costly in model-as-a-service (MaaS) inference settings, where reloading model weights and recompiling computation graphs can introduce significant delays to incoming requests. We propose ReviveMoE, a method for rapid failure recovery in large-scale LLM deployments that avoids restarting the serving instance. ReviveMoE supports both the traditional LLM architecture, which collocates MoE and attention on the same hardware, and disaggregated architectures, which separate MoE from attention. Integrated into Huawei Cloud's MaaS, ReviveMoE is built on top of Huawei's xDeepServe serving platform and the XCCL communication library.
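The abstract contrasts a full restart (reload weights, recompile graph) with recovery that keeps the serving instance alive. A minimal sketch of that distinction, with entirely hypothetical names (the paper's actual mechanism is not specified here): the expensive artifacts stay resident in the process, and a fault only triggers reconstruction of transient per-request state.

```python
# Illustrative sketch only: class and attribute names are invented, not taken
# from ReviveMoE or xDeepServe. The point is the cost asymmetry between a
# cold restart and in-place recovery.
class ServingInstance:
    def __init__(self):
        self.weights_loaded = False
        self.graph_compiled = False
        self.transient_state = {}  # e.g. KV caches, in-flight batches

    def cold_start(self):
        # Slow path: what a service restart implies.
        self.weights_loaded = True   # stands in for a lengthy weight reload
        self.graph_compiled = True   # stands in for graph recompilation
        self.transient_state = {}

    def revive(self):
        # Fast path: weights and compiled graph survive in memory;
        # only lightweight transient state is reconstructed.
        assert self.weights_loaded and self.graph_compiled
        self.transient_state = {}

instance = ServingInstance()
instance.cold_start()                          # initial deployment
instance.transient_state["req-1"] = "kv-cache" # serving traffic
instance.revive()                              # recover from a fault in place
print(instance.weights_loaded, instance.graph_compiled, instance.transient_state)
```

In this toy model, `revive()` never touches the weight-load or compile steps, which is what makes restart-free recovery fast; a real system would additionally need fault detection and collective-communication repair, which the sketch omits.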