Towards Resiliency in Large Language Model Serving with KevlarFlow

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of prolonged service outages—often up to ten minutes—in large language model (LLM) inference systems caused by hardware failures in hyperscale clusters. To mitigate this, the authors propose KevlarFlow, a novel fault-tolerant architecture that uniquely integrates decoupled model-parallel initialization, dynamic traffic rerouting, and background key-value (KV) cache replication. This combination enables millisecond-scale recovery upon node failure. Experimental results demonstrate that KevlarFlow reduces average recovery time by 20× compared to state-of-the-art systems, while decreasing average and p99 request latencies by 3.1× and 2.8×, respectively. Notably, it improves average and p99 time-to-first-token (TTFT) by up to 378.9× and 574.6× under failure conditions, all with negligible runtime overhead, thereby substantially enhancing the availability and responsiveness of LLM services in unreliable hardware environments.

📝 Abstract
Large Language Model (LLM) serving systems remain fundamentally fragile: frequent hardware faults in hyperscale clusters trigger disproportionate service outages in the software stack. Current recovery mechanisms are prohibitively slow, often requiring up to 10 minutes to reinitialize resources and reload massive model weights. We introduce KevlarFlow, a fault-tolerant serving architecture designed to bridge the gap between hardware unreliability and service availability. KevlarFlow leverages (1) decoupled model-parallelism initialization, (2) dynamic traffic rerouting, and (3) background KV cache replication to maintain high throughput during partial failures. Our evaluation demonstrates that KevlarFlow reduces mean-time-to-recovery (MTTR) by 20× and, under failure conditions, improves average latency by 3.1×, 99th-percentile (p99) latency by 2.8×, average time-to-first-token (TTFT) by 378.9×, and p99 TTFT by 574.6× with negligible runtime overhead in comparison to state-of-the-art LLM serving systems.
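The paper itself does not publish code here, but two of the abstract's mechanisms, dynamic traffic rerouting and background KV cache replication, can be illustrated with a minimal sketch. All names below (`Node`, `Router`, `submit`, `fail`) are hypothetical and chosen for illustration; the idea is that each request's KV cache is copied to a healthy peer in the background, so that when a node fails, its in-flight requests can be rerouted to a peer that already holds the cache instead of recomputing the prefill.

```python
# Illustrative sketch only (not KevlarFlow's actual implementation):
# reroute requests from a failed node to a peer holding a KV cache replica.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    healthy: bool = True
    kv_cache: dict = field(default_factory=dict)  # request_id -> cached state

class Router:
    def __init__(self, nodes):
        self.nodes = nodes
        self.assignments = {}  # request_id -> node currently serving it

    def submit(self, request_id, kv_state):
        # Assign the request to the first healthy node.
        primary = next(n for n in self.nodes if n.healthy)
        primary.kv_cache[request_id] = kv_state
        self.assignments[request_id] = primary
        # Background replication: copy the KV state to one healthy peer so a
        # failover does not require redoing the prefill computation.
        for peer in self.nodes:
            if peer is not primary and peer.healthy:
                peer.kv_cache[request_id] = kv_state
                break
        return primary

    def fail(self, node):
        # Mark the node failed and reroute its in-flight requests to any
        # healthy peer that already holds a KV cache replica.
        node.healthy = False
        for rid, owner in list(self.assignments.items()):
            if owner is node:
                replica = next(
                    n for n in self.nodes if n.healthy and rid in n.kv_cache
                )
                self.assignments[rid] = replica

nodes = [Node("gpu-0"), Node("gpu-1"), Node("gpu-2")]
router = Router(nodes)
router.submit("req-1", kv_state={"tokens": [1, 2, 3]})
router.fail(nodes[0])
# req-1 resumes on a node that already holds its KV cache replica.
print(router.assignments["req-1"].name)  # → gpu-1
```

In this toy version, failover is a dictionary lookup rather than a multi-minute reload, which is the intuition behind the paper's millisecond-scale recovery claim; the real system additionally decouples model-parallel initialization so replacement workers come up without reloading weights from scratch.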
Problem

Research questions and friction points this paper is trying to address.

Large Language Model
fault tolerance
service availability
hardware faults
recovery time
Innovation

Methods, ideas, or system contributions that make the work stand out.

fault tolerance
model parallelism
KV cache replication
dynamic traffic rerouting
LLM serving
Shangshu Qian
Purdue University
Kipling Liu
Department of Computer Science, Purdue University, West Lafayette, IN, United States
P. C. Sruthi
Department of Computer Science, Purdue University, West Lafayette, IN, United States
Lin Tan
Mary J. Elmore New Frontiers Professor, Computer Science, Purdue University
LLM4Code, Software reliability, AI, Text analytics, Autoformalization
Yongle Zhang
Purdue University
Software systems, reliability, debugging