Beyond Max Tokens: Stealthy Resource Amplification via Tool Calling Chains in LLM Agents

πŸ“… 2026-01-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work proposes a stealthy, economically motivated denial-of-service attack against large language model (LLM) agents by shifting the attack surface from user prompts or RAG contexts to the agent–tool interaction layer. By fine-tuning compliant tool servers to subtly manipulate text fields and response policies, the method induces agents to generate excessively long yet semantically valid tool-call chains. This amplifies computational costs without violating per-turn token limits. Experiments across six mainstream LLMs demonstrate that the approach can extend task trajectories beyond 60,000 tokens, increasing monetary cost by up to 658Γ—, energy consumption by 100–560Γ—, and GPU KV cache utilization to 35–74%, while reducing co-located task throughput by approximately 50%.

Technology Category

Application Category

πŸ“ Abstract
The agent-tool communication loop is a critical attack surface in modern Large Language Model (LLM) agents. Existing Denial-of-Service (DoS) attacks, primarily triggered via user prompts or injected retrieval-augmented generation (RAG) context, are ineffective for this new paradigm. They are fundamentally single-turn and often lack a task-oriented approach, making them conspicuous in goal-oriented workflows and unable to exploit the compounding costs of multi-turn agent-tool interactions. We introduce a stealthy, multi-turn economic DoS attack that operates at the tool layer under the guise of a correctly completed task. Our method adjusts text-visible fields and a template-governed return policy in a benign, Model Context Protocol (MCP)-compatible tool server, optimizing these edits with a Monte Carlo Tree Search (MCTS) optimizer. These adjustments leave function signatures unchanged and preserve the final payload, steering the agent into prolonged, verbose tool-calling sequences using text-only notices. This compounds costs across turns, escaping single-turn caps while keeping the final answer correct to evade validation. Across six LLMs on the ToolBench and BFCL benchmarks, our attack expands tasks into trajectories exceeding 60,000 tokens, inflates costs by up to 658x, and raises energy by 100-560x. It drives GPU KV cache occupancy from<1% to 35-74% and cuts co-running throughput by approximately 50%. Because the server remains protocol-compatible and task outcomes are correct, conventional checks fail. These results elevate the agent-tool interface to a first-class security frontier, demanding a paradigm shift from validating final answers to monitoring the economic and computational cost of the entire agentic process.
Problem

Research questions and friction points this paper is trying to address.

resource amplification
tool calling chains
LLM agents
economic DoS
stealthy attack
Innovation

Methods, ideas, or system contributions that make the work stand out.

tool calling chains
economic DoS attack
Monte Carlo Tree Search
LLM agents
resource amplification
πŸ”Ž Similar Papers
No similar papers found.