ThinkTrap: Denial-of-Service Attacks against Black-box LLM Services via Infinite Thinking

📅 2025-12-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) deployed in cloud services face emerging denial-of-service (DoS) threats, particularly against black-box, closed-source APIs where internal model parameters and gradients are inaccessible. Method: This paper proposes the first “infinite reasoning” DoS attack framework targeting such black-box LLMs. It maps discrete tokens into a low-dimensional sparse continuous embedding space and employs black-box optimization to discover ultra-short adversarial prompts that trigger excessively long or non-terminating autoregressive generation—inducing unbounded computational overhead with minimal query volume (<10 queries/minute). Contribution/Results: The work pioneers black-box DoS attacks exploiting chain-of-thought inflation; circumvents conventional resource-threshold defenses; and demonstrates practical efficacy across multiple commercial LLM services—reducing throughput to ≤1% and causing complete service outages. This reveals a critical, previously overlooked vulnerability in production LLM deployments.

📝 Abstract
Large Language Models (LLMs) have become foundational components in a wide range of applications, including natural language understanding and generation, embodied intelligence, and scientific discovery. As their computational requirements continue to grow, these models are increasingly deployed as cloud-based services, allowing users to access powerful LLMs via the Internet. However, this deployment model introduces a new class of threat: denial-of-service (DoS) attacks via unbounded reasoning, where adversaries craft specially designed inputs that cause the model to enter excessively long or infinite generation loops. These attacks can exhaust backend compute resources, degrading or denying service to legitimate users. To mitigate such risks, many LLM providers adopt a closed-source, black-box setting to obscure model internals. In this paper, we propose ThinkTrap, a novel input-space optimization framework for DoS attacks against LLM services even in black-box environments. The core idea of ThinkTrap is to first map discrete tokens into a continuous embedding space, then perform efficient black-box optimization in a low-dimensional subspace that exploits input sparsity. The goal of this optimization is to identify adversarial prompts that induce extended or non-terminating generation across several state-of-the-art LLMs, achieving DoS with minimal token overhead. We evaluate the proposed attack across multiple commercial, closed-source LLM services. Our results demonstrate that, even while staying well below the restrictive request-frequency limits commonly enforced by these platforms, typically capped at ten requests per minute (10 RPM), the attack can degrade service throughput to as low as 1% of its original capacity and, in some cases, induce complete service failure.
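The black-box setting described in the abstract reduces to a simple objective: maximize the victim model's generation length using only query access, while staying under the service's request-rate cap. The sketch below illustrates that loop. Everything here is invented for illustration: `query_output_length` is a toy stand-in for a real chat-API call (a real attacker would count completion tokens returned by the provider), and the "loop" trigger merely simulates runaway reasoning; the paper's actual optimizer is more sophisticated.

```python
import time

def query_output_length(prompt: str) -> int:
    """Toy stand-in for a black-box LLM API call.

    A real attacker would send `prompt` to the provider's chat endpoint
    and count the completion tokens. Here, prompts containing "loop"
    simulate a chain-of-thought blow-up.
    """
    return 8000 if "loop" in prompt else 40

def best_prompt(candidates, rpm_limit=10, delay=False):
    """Evaluate candidate prompts one by one, keeping the longest-generating.

    With delay=True, sleeping 60 / rpm_limit seconds between queries keeps
    the probe under a typical 10 RPM rate limit, as the paper reports.
    """
    best, best_len = None, -1
    for p in candidates:
        if delay:
            time.sleep(60 / rpm_limit)
        n = query_output_length(p)
        if n > best_len:
            best, best_len = p, n
    return best, best_len
```

The point of the sketch is that the attacker never touches parameters or gradients; generation length is the only feedback signal.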
Problem

Research questions and friction points this paper is trying to address.

Proposes ThinkTrap for DoS attacks on black-box LLM services
Optimizes prompts to cause infinite generation loops in LLMs
Reduces service throughput significantly under request rate limits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes adversarial prompts via continuous embedding space mapping
Performs black-box optimization in low-dimensional sparse subspace
Induces extended generation loops to degrade service throughput
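As a rough illustration of the subspace idea in the bullets above, the sketch below runs a sparse random search over a tiny continuous space, decoding candidate points to tokens by nearest-neighbor lookup against an embedding table and scoring them with a length oracle. The vocabulary, the 2-D "embeddings", and the oracle are all invented assumptions; ThinkTrap's real projection, dimensionality, and optimizer are not specified in this summary.

```python
import random

# Toy vocabulary with 2-D "embeddings". A real attack would use a public
# embedding table as a proxy for the closed model's token space.
VOCAB = {
    "think": (0.9, 0.1),
    "again": (0.8, 0.2),
    "stop":  (0.1, 0.9),
    "hello": (0.2, 0.8),
}

def decode(z):
    """Map each continuous point in z to its nearest vocabulary token."""
    tokens = []
    for x, y in z:
        tokens.append(min(VOCAB,
                          key=lambda t: (VOCAB[t][0] - x) ** 2
                                        + (VOCAB[t][1] - y) ** 2))
    return " ".join(tokens)

def output_length(prompt):
    """Invented oracle: pretend 'think ... again' prompts inflate reasoning."""
    return 5000 if "think" in prompt and "again" in prompt else 30

def sparse_random_search(steps=200, length=2, seed=0):
    """Black-box search that perturbs one coordinate pair per step (sparsity)."""
    rng = random.Random(seed)
    z = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(length)]
    best_prompt, best_len = decode(z), output_length(decode(z))
    for _ in range(steps):
        cand = [list(p) for p in z]
        i = rng.randrange(length)  # sparse update: touch one position only
        cand[i][0] += rng.uniform(-0.3, 0.3)
        cand[i][1] += rng.uniform(-0.3, 0.3)
        p = decode(cand)
        n = output_length(p)
        if n >= best_len:  # accept on ties so the walk can keep exploring
            z, best_prompt, best_len = cand, p, n
    return best_prompt, best_len
```

Searching in a low-dimensional continuous space with sparse updates is what keeps the query budget small enough to fit under per-minute rate limits.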