Rethinking Latency Denial-of-Service: Attacking the LLM Serving Framework, Not the Model

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel latency-based denial-of-service attack against large language models (LLMs) that shifts the adversarial focus from the model’s algorithmic layer to the system’s scheduling layer. By exploiting the scheduler’s state transition mechanism, the attack first exhausts the global key-value cache during a “Fill” phase to induce head-of-line blocking, then triggers repeated preemptions in a “Squeeze” phase to amplify latency—all under black-box conditions. The approach integrates prompt engineering to control output length, memory-side-channel probing, and resource manipulation to circumvent inherent defenses such as continuous batching. Experimental results demonstrate that, compared to existing attacks, the proposed method increases time-to-first-token latency by 20–280×, per-token latency by 1.5–4×, and reduces attack cost by 30–40%.
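To make the scheduler-level mechanism concrete, here is a minimal toy simulation of a continuous-batching scheduler with a paged KV cache and recompute-style preemption. It is a sketch of the dynamics the summary describes, not the paper's implementation: the block size, cache capacity, eviction policy (evict the most recently admitted request), and request shapes are all illustrative assumptions.

```python
from collections import deque
from dataclasses import dataclass

BLOCK_SIZE = 16      # tokens per KV-cache block (assumed)
TOTAL_BLOCKS = 256   # global KV-cache capacity, i.e. 4096 tokens (assumed)


@dataclass
class Request:
    name: str
    prompt_len: int            # tokens to prefill
    max_new: int               # tokens to decode
    arrival: int = 0           # scheduler step at which the request arrives
    generated: int = 0
    first_token_step: int = -1  # -1 means no token produced yet

    def blocks_held(self) -> int:
        # ceil((prompt + generated) / BLOCK_SIZE)
        return -(-(self.prompt_len + self.generated) // BLOCK_SIZE)


def simulate(reqs, max_steps=100_000):
    waiting = deque(sorted(reqs, key=lambda r: r.arrival))
    running, free, step = [], TOTAL_BLOCKS, 0
    while (waiting or running) and step < max_steps:
        step += 1
        # Admit FCFS while the head's prefill fits; if it does not fit,
        # everything behind it waits too (head-of-line blocking).
        while waiting and waiting[0].arrival <= step:
            if waiting[0].blocks_held() > free:
                break
            free -= waiting[0].blocks_held()
            running.append(waiting.popleft())
        # One decode step per running request.
        for req in list(running):
            if req not in running:          # evicted earlier this step
                continue
            needs_block = (req.prompt_len + req.generated) % BLOCK_SIZE == 0
            while needs_block and free == 0:
                # Recompute-style preemption: evict the most recently
                # admitted request, discard its KV state, re-queue it
                # at the head of the waiting queue.
                victim = running.pop()
                free += victim.blocks_held()
                victim.generated = 0
                waiting.appendleft(victim)
                if victim is req:
                    break
            if req not in running:
                continue
            if needs_block:
                free -= 1
            req.generated += 1
            if req.first_token_step < 0:
                req.first_token_step = step
            if req.generated >= req.max_new:  # finished: release blocks
                running.remove(req)
                free += req.blocks_held()


# "Fill": eight long requests that exactly exhaust the cache, then a small
# victim request arrives and is stuck behind a preempted attacker.
attackers = [Request(f"atk{i}", prompt_len=512, max_new=512, arrival=1)
             for i in range(8)]
victim = Request("victim", prompt_len=32, max_new=32, arrival=2)
simulate(attackers + [victim])
print("victim TTFT (steps):", victim.first_token_step - victim.arrival)
```

Running the toy shows the victim's time-to-first-token blowing up once the attacker requests saturate the cache: the preempted attacker is re-queued at the head of the waiting queue and blocks the smaller victim behind it, which is the head-of-line effect the "Fill" phase exploits, while the repeated evict-and-re-prefill cycles mirror the "Squeeze" phase.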

📝 Abstract
Large Language Models face an emerging and critical threat known as latency attacks. Because LLM inference is inherently expensive, even modest slowdowns can translate into substantial operating costs and severe availability risks. Recently, a growing body of research has focused on algorithmic complexity attacks that craft inputs to trigger worst-case output lengths. However, we report a counter-intuitive finding: these algorithmic latency attacks are largely ineffective against modern LLM serving systems. We reveal that system-level optimizations such as continuous batching provide a logical isolation that mitigates the contagious latency impact on co-located users. In this paper, we therefore shift the focus from the algorithm to the system layer and introduce a new Fill and Squeeze attack strategy targeting the state transitions of the scheduler. "Fill" first exhausts the global KV cache to induce Head-of-Line blocking, while "Squeeze" forces the system into repetitive preemption. By manipulating output lengths with methods ranging from simple plain-text prompts to more complex prompt engineering, and by leveraging side-channel probing of memory status, we demonstrate that the attack can be orchestrated in a black-box setting at much lower cost. Extensive evaluations show up to a 20-280x average slowdown in Time to First Token and a 1.5-4x average slowdown in Time Per Output Token compared to existing attacks, with 30-40% lower attack cost.
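The black-box orchestration the abstract describes hinges on sensing when the server's memory is under pressure. The paper does this with side-channel probing of memory status; as a rough stand-in, the hedged sketch below times a minimal streaming request against an assumed OpenAI-compatible endpoint and treats an inflated time-to-first-token as a signal of queueing or KV-cache pressure. The endpoint URL and model name are placeholders, and the paper's actual probe may work differently.

```python
import time

import requests

ENDPOINT = "http://localhost:8000/v1/completions"  # placeholder URL
MODEL = "example-model"                            # placeholder model name


def probe_ttft(timeout: float = 30.0) -> float:
    """Time-to-first-token of a tiny streaming probe request, in seconds."""
    payload = {
        "model": MODEL,
        "prompt": "Hi",
        "max_tokens": 1,
        "stream": True,  # stream so the first SSE line marks the first token
    }
    start = time.monotonic()
    with requests.post(ENDPOINT, json=payload, stream=True,
                       timeout=timeout) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:     # first non-empty SSE "data:" line
                return time.monotonic() - start
    raise RuntimeError("stream ended before any token arrived")


# Compare against a quiet-time baseline; a large ratio suggests queueing
# or KV-cache pressure on the serving side.
baseline = min(probe_ttft() for _ in range(3))
print(f"TTFT ratio vs. baseline: {probe_ttft() / baseline:.1f}x")
```

A ratio well above 1 against the quiet-time baseline suggests the scheduler is queueing or preempting requests, which is the state the "Squeeze" phase seeks to detect and sustain.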
Problem

Research questions and friction points this paper is trying to address.

Latency Denial-of-Service
LLM Serving
System-level Attack
KV Cache
Scheduler
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latency Denial-of-Service
LLM Serving System
Fill and Squeeze Attack
KV Cache Exhaustion
Scheduler Preemption