🤖 AI Summary
This work addresses prolonged service outages, often lasting up to ten minutes, in large language model (LLM) inference systems caused by hardware failures in hyperscale clusters. To mitigate this, the authors propose KevlarFlow, a fault-tolerant serving architecture that integrates decoupled model-parallel initialization, dynamic traffic rerouting, and background key-value (KV) cache replication, enabling millisecond-scale recovery when a node fails. Experimental results show that KevlarFlow reduces average recovery time by 20× compared to state-of-the-art systems, while decreasing average and p99 request latencies by 3.1× and 2.8×, respectively. Notably, it improves average and p99 time-to-first-token (TTFT) by up to 378.9× and 574.6× under failure conditions, all with negligible runtime overhead, substantially enhancing the availability and responsiveness of LLM services on unreliable hardware.
📝 Abstract
Large Language Model (LLM) serving systems remain fundamentally fragile: frequent hardware faults in hyperscale clusters trigger disproportionate service outages in the software stack. Current recovery mechanisms are prohibitively slow, often requiring up to 10 minutes to reinitialize resources and reload massive model weights. We introduce KevlarFlow, a fault-tolerant serving architecture designed to bridge the gap between hardware unreliability and service availability. KevlarFlow leverages 1) decoupled model-parallelism initialization, 2) dynamic traffic rerouting, and 3) background KV cache replication to maintain high throughput during partial failures. Our evaluation demonstrates that KevlarFlow reduces mean-time-to-recovery (MTTR) by 20x and, under failure conditions, improves average latency by 3.1x, 99th-percentile (p99) latency by 2.8x, average time-to-first-token (TTFT) by 378.9x, and p99 TTFT by 574.6x, with negligible runtime overhead compared to state-of-the-art LLM serving systems.
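Two of the mechanisms the abstract names, dynamic traffic rerouting and background KV cache replication, lend themselves to a small illustration. The sketch below is a toy Python model of how such a control plane might behave; the class names, the round-robin policy, and the in-memory replica store are illustrative assumptions for this sketch, not KevlarFlow's actual interfaces.

```python
import threading

class KVReplicaStore:
    """Toy stand-in for background KV cache replication: KV blocks for
    in-flight requests are mirrored here so a failover target can resume
    decoding without re-running the prompt's prefill."""
    def __init__(self):
        self._blocks = {}  # request_id -> replicated KV blocks
        self._lock = threading.Lock()

    def replicate(self, request_id, kv_blocks):
        with self._lock:
            self._blocks[request_id] = list(kv_blocks)

    def recover(self, request_id):
        with self._lock:
            return self._blocks.get(request_id, [])

class Router:
    """Toy router illustrating dynamic traffic rerouting: a failed node is
    dropped from the healthy set immediately, so the next route() call
    lands on a surviving replica instead of waiting for a full restart."""
    def __init__(self, replicas, store):
        self._healthy = sorted(replicas)
        self._store = store
        self._rr = 0  # round-robin cursor (an assumed policy)
        self._lock = threading.Lock()

    def mark_failed(self, replica):
        with self._lock:
            if replica in self._healthy:
                self._healthy.remove(replica)

    def route(self, request_id):
        with self._lock:
            if not self._healthy:
                raise RuntimeError("no healthy replicas left")
            target = self._healthy[self._rr % len(self._healthy)]
            self._rr += 1
        # On failover, the target would load this request's replicated
        # KV cache instead of recomputing prefill from scratch.
        return target, self._store.recover(request_id)
```

In this toy model, after `mark_failed("gpu-0")` every subsequent request routes to the surviving replica, and a request whose KV blocks were replicated in the background resumes with its cache intact, which is the intuition behind millisecond-scale recovery rather than a multi-minute reload.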